Abstract
Group interactions are prevalent in a variety of areas. Many of them, including email exchanges, chemical reactions, and bitcoin transactions, are directional, and thus they are naturally modeled as directed hypergraphs, where each hyperarc consists of a set of source nodes and a set of destination nodes. For directed graphs, which are a special case of directed hypergraphs, reciprocity has played a key role as a fundamental graph statistic, both in revealing organizing principles of graphs and in solving graph learning tasks. For general directed hypergraphs, however, no systematic measure of reciprocity has been developed. In this work, we investigate the reciprocity of 11 real-world hypergraphs. To this end, we first introduce eight axioms that any reasonable measure of reciprocity should satisfy. Second, we propose HyperRec, a family of principled measures of hypergraph reciprocity that satisfy all the axioms. Third, we develop FastHyperRec, a fast and exact algorithm for computing the measures. Fourth, using them, we examine 11 real-world hypergraphs and discover patterns that distinguish them from random hypergraphs. Lastly, we propose ReDi, an intuitive generative model for directed hypergraphs that exhibits the patterns.
Notes
This work is an extended version of Kim et al. (2022), which was presented at the 22nd IEEE International Conference on Data Mining (ICDM 2022). In the extended version, we introduce several theoretical extensions: (a) generalized versions of the axioms in Sect. 3.1 and a proof of Theorem 1 for the generalized versions (Appendix 1), (b) seven baseline hypergraph reciprocity measures (Sect. 3.3), (c) a proof that none of the baseline measures satisfies all the axioms (Appendix 2), and (d) proofs of Theorem 2 and Corollary 1 (Appendix 1). In addition, we conduct additional experiments regarding (a) the efficiency of FastHyperRec (Fig. 4 and Table 3 in Sect. 3.4), (b) the statistical significance of Observation 1 (Table 8 in Sect. 4.2), (c) the robustness of HyperRec with respect to the choice of \(\alpha\) (Tables 7 and 9 in Sect. 4.2), and (d) the verification of Observation 2 in 12 more real and synthetic hypergraphs (Fig. 5 in Sect. 4.2 and Fig. 7 in Sect. 5.2). Lastly, we provide one additional reciprocal pattern in real-world hypergraphs (Observation 3: Fig. 6 in Sect. 4.2) and verify whether ReDi can reproduce this pattern.
Note that all arcs in \(R_{i}\) are used in computing the reciprocity of \(e_{i}\), and thus it does not correspond to a search space.
Note that, to improve legibility, we remove data points that lie outside the range \([Q_{1} - 1.5(Q_{3} - Q_{1}), Q_{3} + 1.5(Q_{3} - Q_{1})]\), where \(Q_{1}\) and \(Q_{3}\) denote the first and third quartiles of the corresponding distribution, from the box plots.
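The trimming rule above can be sketched as follows (a minimal illustrative sketch; the quartiles are computed with linear interpolation, as in common box-plot implementations, and the sample values are hypothetical):

```python
def trim_for_boxplot(xs):
    """Keep only points inside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] (the box-plot whisker range)."""
    s = sorted(xs)
    def quantile(p):  # linear interpolation between order statistics
        k = (len(s) - 1) * p
        f, c = int(k), min(int(k) + 1, len(s) - 1)
        return s[f] + (s[c] - s[f]) * (k - f)
    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in xs if lo <= x <= hi]

# The extreme value 100.0 falls outside the whisker range and is dropped.
assert trim_for_boxplot([1.0, 2.0, 2.5, 3.0, 100.0]) == [1.0, 2.0, 2.5, 3.0]
```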
The search space of \(\beta _{1}\) is (a) \([0.05, 0.1, \ldots , 0.6]\) for the small datasets where \(|V \vert \le 10^{4}\), (b) \([0.001, 0.0015, \ldots , 0.005]\) for the dense large datasets where \(|V \vert > 10^{4}\) and \({|E \vert }/{|V \vert } \ge 3\), and (c) \([0.01, 0.02, \ldots , 0.15]\) for the other sparse large datasets. The search space of \(\beta _{2}\) is fixed to \([0.1, \ldots , 0.5]\) for all datasets.
As bitcoin transactions are made among randomly chosen accounts, the repetition of (partial) group interactions is rarely observed. Due to this intrinsic characteristic of the datasets, we use the degrees of individual nodes instead of the degrees of groups when ReDi and the baseline model are given the statistics from the bitcoin datasets. The same strategy is also used for the baseline model when the input statistics are from the Q&A server dataset. Without this strategy, the baseline model takes more than 12 hours.
Note that \((1+\frac{1}{x})^{x}\) is a well-known increasing function whose limit as \(x\rightarrow \infty\) is \(e\). Thus, \(\log (1+\frac{1}{x})^{x}=x\log (1+\frac{1}{x})\) is also an increasing function, and since \(x\mapsto 1/x\) is decreasing for \(x>0\), the composition \(\frac{1}{x}\log (1+x)\) is decreasing for \(x>0\).
We use the venues listed at Wikipedia (2022).
References
Akoglu L, Vaz de Melo PO, Faloutsos C (2012) Quantifying reciprocity in large weighted communication networks. In: Advances in knowledge discovery and data mining: 16th Pacific-Asia conference, PAKDD 2012, Kuala Lumpur, Malaysia, May 29–June 1, 2012, Proceedings, Part II 16. Springer, pp 85–96. https://doi.org/10.1007/978-3-642-30220-6_8
Albert R, Barabási AL (2002) Statistical mechanics of complex networks. Rev Mod Phys 74(1):47. https://doi.org/10.1103/RevModPhys.74.47
Internet Archive (2022) Stack Exchange data dump: question-answering dataset of Stack Exchange Inc. https://archive.org/details/stackexchange
Benson AR, Abebe R, Schaub MT et al (2018a) Simplicial closure and higher-order link prediction. Proc Natl Acad Sci 115(48):E11221–E11230. https://doi.org/10.1073/pnas.1800683115
Benson AR, Kumar R, Tomkins A (2018b) Sequences of sets. In: Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, pp 1148–1157. https://doi.org/10.1145/3219819.3220100
Bu F, Lee G, Shin K (2023) Hypercore decomposition for non-fragile hyperedges: concepts, algorithms, observations, and applications. arXiv preprint arXiv:2301.08440. https://doi.org/10.48550/arXiv.2301.08440
Chodrow P, Mellor A (2020) Annotated hypergraphs: models and applications. Appl Netw Sci 5:1–25. https://doi.org/10.1007/s41109-020-0252-y
Choo H, Shin K (2022) On the persistence of higher-order interactions in real-world hypergraphs. In: Proceedings of the 2022 SIAM international conference on data mining (SDM). SIAM, pp 163–171. https://doi.org/10.1137/1.9781611977172.19
Cirkovic D, Wang T, Resnick S (2022) Preferential attachment with reciprocity: properties and estimation. arXiv preprint arXiv:2201.03769. https://doi.org/10.48550/arXiv.2201.03769
Comrie C, Kleinberg J (2021) Hypergraph ego-networks and their temporal evolution. In: 2021 IEEE international conference on data mining (ICDM). IEEE, pp 91–100. https://doi.org/10.1109/ICDM51629.2021.00019
Do MT, Yoon Se, Hooi B et al (2020) Structural patterns and generative models of real-world hypergraphs. In: Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining, pp 176–186. https://doi.org/10.1145/3394486.3403060
Dong Y, Sawin W, Bengio Y (2020) Hnhn: hypergraph networks with hyperedge neurons. arXiv preprint arXiv:2006.12278. https://doi.org/10.48550/arXiv.2006.12278
Garlaschelli D, Loffredo MI (2004) Fitness-dependent topological properties of the world trade web. Phys Rev Lett 93(18):188701. https://doi.org/10.1103/PhysRevLett.93.188701
Hidalgo CA, Rodríguez-Sickert C (2008) The dynamics of a mobile phone network. Phys A Stat Mech Appl 387(12):3017–3024. https://doi.org/10.1016/j.physa.2008.01.073
Kim S, Choe M, Yoo J et al (2022) Reciprocity in directed hypergraphs: measures, findings, and generators. In: The 22nd IEEE international conference on data mining, ICDM 2022. IEEE Computer Society. https://doi.org/10.1109/ICDM54844.2022.00122
Kook Y, Ko J, Shin K (2020) Evolution of real-world hypergraphs: patterns and models without oracles. In: 2020 IEEE international conference on data mining (ICDM). IEEE, pp 272–281. https://doi.org/10.1109/ICDM50108.2020.00036
Lee G, Shin K (2021) Thyme+: temporal hypergraph motifs and fast algorithms for exact counting. In: 2021 IEEE international conference on data mining (ICDM). IEEE, pp 310–319. https://doi.org/10.1109/ICDM51629.2021.00042
Lee G, Ko J, Shin K (2020) Hypergraph motifs: concepts, algorithms, and discoveries. PVLDB 13(12):2256–2269. https://doi.org/10.14778/3407790.3407823
Lee G, Choe M, Shin K (2021) How do hyperedges overlap in real-world hypergraphs? Patterns, measures, and generators. Proc Web Conf 2021:3396–3407. https://doi.org/10.1145/3442381.3450010
Leskovec J (2008) Dynamics of large networks. Carnegie Mellon University, Pittsburgh
Leskovec J, Krevl A (2014) SNAP datasets: Stanford large network dataset collection. http://snap.stanford.edu/data
Lin J (1991) Divergence measures based on the Shannon entropy. IEEE Trans Inf Theory 37(1):145–151. https://doi.org/10.1109/18.61115
Luo X, Peng J, Liang J (2022) Directed hypergraph attention network for traffic forecasting. IET Intell Transport Syst 16(1):85–98. https://doi.org/10.1049/itr2.12130
Newman ME, Forrest S, Balthrop J (2002) Email networks and the spread of computer viruses. Phys Rev E 66(3):035101. https://doi.org/10.1103/PhysRevE.66.035101
Nguyen VA, Lim EP, Tan HH et al (2010) Do you trust to get trust? A study of trust reciprocity behaviors and reciprocal trust prediction. In: Proceedings of the 2010 SIAM international conference on data mining. SIAM, pp 72–83. https://doi.org/10.1137/1.9781611972801.7
Pearcy N, Crofts JJ, Chuzhanova N (2014) Hypergraph models of metabolism. Int J Biol Vet Agric Food Eng 8(8):752–756. https://doi.org/10.5281/zenodo.1094247
Ranshous S, Joslyn CA, Kreyling S et al (2017) Exchange pattern mining in the bitcoin transaction directed hypergraph. In: Financial cryptography and data security: FC 2017 international workshops, WAHC, BITCOIN, VOTING, WTSC, and TA, Sliema, Malta, April 7, 2017, Revised Selected Papers 21. Springer, pp 248–263. https://doi.org/10.1007/978-3-319-70278-0_16
Savitzky A, Golay MJ (1964) Smoothing and differentiation of data by simplified least squares procedures. Anal Chem 36(8):1627–1639. https://doi.org/10.1021/ac60214a047
Sinha A, Shen Z, Song Y et al (2015) An overview of microsoft academic service (mas) and applications. In: Proceedings of the 24th international conference on world wide web, pp 243–246. https://doi.org/10.1145/2740908.2742839
Squartini T, Picciolo F, Ruzzenenti F et al (2013) Reciprocity of weighted networks. Sci Rep 3(1):1–9. https://doi.org/10.1038/srep02729
Wang T, Resnick SI (2022) Asymptotic dependence of in-and out-degrees in a preferential attachment model with reciprocity. Extremes. https://doi.org/10.1007/s10687-022-00439-5
Wikipedia (2022) List of computer science conferences. https://en.wikipedia.org/wiki/List_of_computer_science_conferences
Wu J, Liu J, Chen W et al (2021) Detecting mixing services via mining bitcoin transaction network with hybrid motifs. IEEE Trans Syst Man Cybern Syst 52(4):2237–2249. https://doi.org/10.1109/TSMC.2021.3049278
Yadati N, Nitin V, Nimishakavi M et al (2020) Nhp: neural hypergraph link prediction. In: Proceedings of the 29th ACM international conference on information & knowledge management, pp 1705–1714. https://doi.org/10.1145/3340531.3411870
Yadati N, Gao T, Asoodeh S et al (2021) Graph neural networks for soft semi-supervised learning on hypergraphs. In: Advances in knowledge discovery and data mining: 25th Pacific-Asia conference, PAKDD 2021, Virtual Event, May 11–14, 2021, Proceedings, Part I. Springer, pp 447–458. https://doi.org/10.1007/978-3-030-75762-5_36
Yoon S, Song H, Shin K et al (2020) How much and when do we need higher-order information in hypergraphs? A case study on hyperedge prediction. Proc Web Conf 2020:2627–2633. https://doi.org/10.1145/3366423.3380016
Funding
This work was supported by National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2020R1C1C1008296) and Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00871, Development of AI Autonomy and Knowledge Enhancement for AI Agent Collaboration) (No. 2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)).
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose.
Additional information
Responsible editor: Tim Weninger.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix 1: Proof of Theorems
In this section, we provide proofs of the theorems in the main paper. We first introduce some preliminaries and then prove that HyperRec, the proposed family of reciprocity measures, satisfies all of Axioms 1–8. Then, we prove the exactness of FastHyperRec and the correctness of the related complexity-reduction techniques.
1.1 Preliminaries of Proofs
In this subsection, we first give the general form of our proposed measure. Then, we introduce several important properties of the Jensen–Shannon divergence (JSD) (Lin 1991), which plays a key role in our proofs. After that, we examine how these concepts apply to our measure. The basic symbols used for hypergraphs and arcs are defined in Sect. 2.1.
HyperRec, the proposed reciprocity measure for an arc \(e_{i}\) and for a hypergraph G, is defined at the arc level as
$$\begin{aligned} r(e_{i}, R_{i}) = \left( \frac{1}{|R_{i} \vert } \right) ^{\alpha } \left( 1 - \frac{\sum _{v_h \in H_i} {\mathcal{L}}(p_{h}, p^{*}_{h})}{|H_i \vert \cdot \log {2} }\right) , \end{aligned}$$
where \({\mathcal{L}}(p_{h}, p^{*}_{h})\) denotes Jensen-Shannon Divergence (Lin 1991) between a transition probability distribution \(p_{h}\) and the optimal transition probability distribution \(p^{*}_{h}\).
For a target arc \(e_{j}\) with an arbitrary non-empty reciprocal set \(R_{j}\), the transition probability is defined as
where \({\textbf{1}}[\text{TRUE}]=1\), and \({\textbf{1}}[\text{FALSE}]=0\).
In Lemma 1, we provide several theoretical properties of JSD that are used in our proofs. Note that a general form of \(\textrm{JSD} ({P}\Vert {Q})\) is defined as
$$\begin{aligned} \textrm{JSD} ({P}\Vert {Q}) = {\mathcal{L}}(P, Q) = \sum _{i=1}^{|V \vert } \ell (p_{i}, q_{i}), \end{aligned}$$(14)
where
$$\begin{aligned} \ell (p_{i}, q_{i}) = \frac{p_{i}}{2}\log {\frac{2p_{i}}{p_{i}+q_{i}}} + \frac{q_{i}}{2}\log {\frac{2q_{i}}{p_{i}+q_{i}}}. \end{aligned}$$(15)
Lemma 1
(Basic Properties of Jensen-Shannon Divergence). The Jensen-Shannon Divergence (JSD) has the following properties:
-
A-I. For any two discrete probability distributions P and Q, \(0 \le \textrm{JSD} ({P}\Vert {Q}) \le \log 2\) holds.
-
A-II. For two discrete probability distributions P and Q whose non-zero-probability domains do not overlap (i.e., \(p_{i}q_{i} = 0, \ \forall i \in \{1, \ldots ,|V \vert \}\)), \(\textrm{JSD} ({P}\Vert {Q})\) is maximized, and the maximum value is \(\log {2}\).
-
A-III. Consider two discrete probability distributions P and Q. If there exists a value where both P and Q have non-zero probability, \(\textrm{JSD} ({P}\Vert {Q}) < \log {2}\) holds.
Proof
-
(Proof of A-I) Refer to Lin (1991) for a proof of A-I.
-
(Proof of A-II) Let \({\mathcal{X}}_{p}\) be the domain where P has non-zero probability, and let \({\mathcal{X}}_{q}\) be the domain where Q has non-zero probability. Since \({\mathcal{X}}_{p}\) and \({\mathcal{X}}_{q}\) do not overlap, Eq. (14) is rewritten as
$$\begin{aligned} {\mathcal{L}}(P,Q) = \sum _{i\in {\mathcal{X}}_{p}}\frac{p_{i}}{2}\log 2 + \sum _{i\in {\mathcal{X}}_{q}}\frac{q_{i}}{2}\log 2 = \frac{\log 2}{2}\left( \sum _{i \in {\mathcal{X}}_{p}}p_{i} + \sum _{i \in {\mathcal{X}}_{q}}q_{i} \right) = \log 2. \end{aligned}$$

-
(Proof of A-III) Let k be a point where \(p_{k}q_{k} \ne 0\) holds. Then, Eq. (14) is rewritten as
$$\begin{aligned} {\mathcal{L}}(P,Q) \ \le \sum _{i\in {\mathcal{X}}_{p}\setminus k}\frac{p_{i}}{2}\log 2 + \sum _{i\in {\mathcal{X}}_{q}\setminus k}\frac{q_{i}}{2}\log 2 + \left( {\frac{p_k}{2} \log {\frac{2p_k}{p_k + q_k}} + \frac{q_k}{2} \log {\frac{2q_k}{p_k + q_k}}}\right) . \end{aligned}$$(16)

The inequality below implies that Eq. (16) is smaller than \(\log 2\).
$$\begin{aligned}{} & {} \left( {\frac{p_k}{2} \log {2} + \frac{q_k}{2} \log {2}}\right) - \left( {\frac{p_k}{2} \log {\frac{2p_k}{p_k + q_k}} + \frac{q_k}{2} \log {\frac{2q_k}{p_k + q_k}}}\right)> 0\\{} & {} \quad \equiv {\frac{p_k}{2} \log {\left( 1 + \frac{q_k}{p_k}\right) } + \frac{q_k}{2} \log {\left( 1 + \frac{p_k}{q_k}\right) }}> 0 \quad (\because p_{k},q_{k} > 0). \end{aligned}$$Since \(\log (x)>0\) holds for any \(x>1\), the last inequality holds. Thus, we can conclude that \(\textrm{JSD} ({P}\Vert {Q}) < \log 2\) holds in this case.
\(\square\)
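The three properties above can be checked numerically (a self-contained sketch using the summand form of Eq. (14); natural logarithms and the convention \(0\log 0 = 0\) are assumed, and the example distributions are hypothetical):

```python
import math

def jsd(p, q):
    """JSD(P||Q) via the summand form of Eq. (14); natural log, 0*log(0) := 0."""
    def term(a, b):
        return 0.0 if a == 0 else (a / 2) * math.log(2 * a / (a + b))
    return sum(term(pi, qi) + term(qi, pi) for pi, qi in zip(p, q))

log2 = math.log(2)

# A-I: 0 <= JSD(P||Q) <= log 2
assert 0.0 <= jsd([0.5, 0.3, 0.2, 0.0], [0.1, 0.1, 0.4, 0.4]) <= log2

# A-II: disjoint non-zero-probability domains attain the maximum, log 2
assert abs(jsd([0.6, 0.4, 0.0, 0.0], [0.0, 0.0, 0.7, 0.3]) - log2) < 1e-12

# A-III: a shared non-zero-probability point forces JSD strictly below log 2
assert jsd([0.6, 0.4, 0.0], [0.0, 0.5, 0.5]) < log2
```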
In Lemma 2, we provide several basic properties of HyperRec, our proposed measure of reciprocity in hypergraphs.
Lemma 2
(Basic Properties of HyperRec). HyperRec (i.e., defining \(r(e_i, R_i)\) as in Eq. (5)) has the following properties:
-
A-IV. If a target arc’s head set and the tail sets of its reciprocal arcs do not overlap, the target arc’s reciprocity becomes zero. Formally,
$$\begin{aligned} \text{If } H_{i} \cap \bigcup _{e_{k} \in R_{i}} T_{k} = \emptyset \quad \text{ then } \quad r(e_{i}, R_{i}) = 0. \end{aligned}$$ -
A-V. If a target arc’s tail set and the head sets of its reciprocal arcs do not overlap, the target arc’s reciprocity becomes zero. Formally,
$$\begin{aligned} \text{If } T_{i} \cap \bigcup _{e_{k} \in R_{i}} H_{k} = \emptyset \quad \text{ then } \quad r(e_{i}, R_{i}) = 0. \end{aligned}$$ -
A-VI. If (a) a target arc’s head set and the tail sets of its reciprocal arcs overlap and (b) the target arc’s tail set and the head sets of its reciprocal arcs overlap, then the target arc’s reciprocity is greater than zero. Formally,
$$\begin{aligned} \text{If } \sum _{e_{k} \in R_{i}}|H_{i} \cap T_{k}\vert \cdot |T_i \cap H_{k} \vert \ge 1 \quad \text{ then } \quad r(e_{i}, R_{i}) > 0. \end{aligned}$$
Proof
Below, we use \({\mathcal{L}}_{max}\) to denote the maximum value of JSD, which is \(\log 2\).
-
(Proof of A-IV) For this case, as mentioned in Sect. 3.2, the probability mass is non-zero only at \(v_{sink}\). On the other hand, the optimal transition probability \(p^{*}\) is non-zero only at each \(v \in T_{i}\). Since \(v_{sink} \not \in T_{i}\), the non-zero-probability domains of the transition probability and the optimal transition probability do not overlap, and by A-II, the probabilistic distance between them is maximized. This happens for all \(v \in H_{i}\). Therefore,
$$\begin{aligned} r(e_i , R_i)&= \left( \frac{1}{|R_{i} \vert } \right) ^\alpha \left( 1 - \frac{\sum _{v_h \in H_i} {\mathcal{L}}_{max}}{|H_i \vert \cdot {\mathcal{L}}_{max} }\right) \\&= \left( \frac{1}{|R_{i} \vert } \right) ^\alpha \left( 1 - \frac{|H_{i} \vert \cdot {\mathcal{L}}_{max}}{|H_i \vert \cdot {\mathcal{L}}_{max} }\right) = \left( \frac{1}{|R_{i} \vert } \right) ^\alpha \times 0 = 0. \end{aligned}$$ -
(Proof of A-V) As in A-IV, the non-zero-probability domains of the transition probability and the optimal transition probability do not overlap since \(T_{i} \cap \bigcup _{e_{k} \in R_{i}} H_{k} = \emptyset\). Again, the probabilistic distance is maximized. This happens for all \(v \in H_{i}\), i.e.,
$$\begin{aligned} r(e_i , R_i)&= \left( \frac{1}{|R_{i} \vert } \right) ^\alpha \left( 1 - \frac{\sum _{v_h \in H_i} {\mathcal{L}}_{max}}{|H_i \vert \cdot {\mathcal{L}}_{max} }\right) \\&= \left( \frac{1}{|R_{i} \vert } \right) ^\alpha \left( 1 - \frac{|H_{i} \vert \cdot {\mathcal{L}}_{max}}{|H_i \vert \cdot {\mathcal{L}}_{max} }\right) = 0 \end{aligned}$$ -
(Proof of A-VI) According to the statement, there exists at least one reciprocal arc \(e_{k}\) whose (a) tail set overlaps with the target arc’s head set (i.e., \(|T_{k} \cap H_{i} \vert \ge 1\)) and (b) head set overlaps with the target arc’s tail set (i.e., \(|H_{k} \cap T_{i} \vert \ge 1\)). Thus, for \(v_{h} \in H_{i} \cap T_{k}\), \(p_{h}\) and \(p^{*}_{h}\) share non-zero probability domains, which implies \({\mathcal{L}}(p_{h}, p^{*}_{h}) < \log {2}\) by A-III. Hence, we can derive the following inequality:
$$\begin{aligned} r(e_i , R_i)&= \left( \frac{1}{|R_{i} \vert } \right) ^\alpha \left( 1 - \frac{\sum _{v_h \in H_i} {\mathcal{L}}(p_{h} , p^{*}_{h})}{|H_i \vert \cdot {\mathcal{L}}_{max} }\right) \\&> \left( \frac{1}{|R_{i} \vert } \right) ^\alpha \left( 1 - \frac{|H_{i} \vert \cdot {\mathcal{L}}_{max}}{|H_i \vert \cdot {\mathcal{L}}_{max} }\right) = 0. \end{aligned}$$
\(\square\)
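The boundary behavior in A-IV to A-VI follows directly from the closed form of \(r(e_{i}, R_{i})\) used above, which the following sketch evaluates on hand-picked distributions (an illustrative check: the transition distributions `p_h` and the optimal distributions, here taken uniform over the tail nodes, are toy inputs rather than quantities derived from a concrete hypergraph):

```python
import math

def jsd(p, q):
    """Summand form of the Jensen-Shannon divergence (natural log)."""
    def term(a, b):
        return 0.0 if a == 0 else (a / 2) * math.log(2 * a / (a + b))
    return sum(term(pi, qi) + term(qi, pi) for pi, qi in zip(p, q))

def hyperrec(p_list, p_star_list, n_reciprocal, alpha=1.0):
    """r(e_i, R_i) = (1/|R_i|)^alpha * (1 - sum_h L(p_h, p*_h) / (|H_i| * log 2))."""
    total = sum(jsd(p, q) for p, q in zip(p_list, p_star_list))
    return (1.0 / n_reciprocal) ** alpha * (1.0 - total / (len(p_list) * math.log(2)))

# Two head-set nodes; index 0 plays the role of v_sink, indices 1-2 are the tail T_i.
p_star = [[0.0, 0.5, 0.5]] * 2  # illustrative: uniform over the tail nodes

# A-IV/A-V: every p_h is concentrated on the sink => disjoint supports => r = 0
assert abs(hyperrec([[1.0, 0.0, 0.0]] * 2, p_star, n_reciprocal=1)) < 1e-12

# A-VI: one head node's distribution overlaps a tail node => r > 0
assert hyperrec([[0.5, 0.5, 0.0], [1.0, 0.0, 0.0]], p_star, n_reciprocal=1) > 0.0
```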
1.2 Proof of Theorem 1
In this section, we show that the proposed measure HyperRec satisfies all the generalized axioms. For Generalized Axiom 1, we simply show that the former arc’s reciprocity is zero (i.e., \(r(e_{i}, R_{i}) = 0\)), while the latter arc’s reciprocity is positive (i.e., \(r(e_{j}, R_{j}) > 0\)). For Generalized Axioms 2–4, we first show how the formal statement of each axiom can be written in terms of the probabilistic distance. Then, we show that the less reciprocal case has a greater probabilistic distance between the transition probability and the optimal transition probability for every head-set node of the target arc.
1.2.1 Proof of the Fact that HyperRec Satisfies Axiom 1
Through an example, Axiom 1 states that an arc that has at least one inversely-overlapping arc should be more reciprocal than an arc without any inversely-overlapping arc. The generalized statement of Axiom 1 is formalized in Generalized Axiom 1.
Proposition 1.
HyperRec (i.e., defining \(r(e_i, R_i)\) as in Eq. (5)) satisfies Generalized Axiom 1.
Proof
We first show that \(r(e_{i}, R_{i}) = 0\). For simplicity, we call \(e'_{i} \in R_{i}\) non-influential if \(e'_{i}\) does not contribute any overlap between the non-zero-probability domain of the transition probability and that of the optimal transition probability. The suggested condition (\(\forall e'_{i} \in R_{i}: \min (\vert H_{i} \cap T'_{i}\vert , \vert T_{i} \cap H'_{i}\vert ) = 0\)) implies that every reciprocal arc \(e'_{i} \in R_{i}\) lies in one of the following cases: (1) when \((H_{i} \cap T'_{i} = \emptyset ) \wedge (T_{i} \cap H'_{i} \ne \emptyset )\) holds, as shown in the proof of A-IV, \(e'_{i}\) is non-influential; (2) when \((H_{i} \cap T'_{i} \ne \emptyset ) \wedge (T_{i} \cap H'_{i} = \emptyset )\) holds, as shown in the proof of A-V, \(e'_{i}\) is non-influential; (3) when \((H_{i} \cap T'_{i} = \emptyset ) \wedge (T_{i} \cap H'_{i} = \emptyset )\) holds, as shown in the proofs of A-IV and A-V, \(e'_{i}\) is non-influential. That is, all reciprocal hyperarcs in \(R_{i}\) are non-influential, and by A-II, the distance between the transition probability and the optimal transition probability is maximized. This happens for all \(v \in H_{i}\). Therefore,
We now show that \(r(e_{j}, R_{j}) > 0\) holds. The suggested condition (\(\exists e'_{j} \in R_{j}: \min (\vert H_{j} \cap T'_{j}\vert , \vert T_{j} \cap H'_{j}\vert ) \ge 1\)) is equivalent to the condition of A-VI. Thus, by A-VI, the inequality \(r(e_{j}, R_{j}) > 0\) holds. Since \(r(e_{i}, R_{i}) = 0\) and \(r(e_{j}, R_{j}) > 0\), the following inequality holds: \(r(e_{i}, R_{i}) < r(e_{j}, R_{j})\). \(\square\)
1.2.2 Proof of the Fact that HyperRec Satisfies Axiom 2
Through an example, Axiom 2 states that an arc that inversely overlaps with reciprocal arcs to a greater extent should be more reciprocal. In addition, Axiom 2 is divided into two cases, which are formalized in Generalized Axiom 2A and 2B respectively.
Proposition 2A.
HyperRec (i.e., defining \(r(e_i, R_i)\) as in Eq. (5)) satisfies Generalized Axiom 2A.
Proof
Since \(|R_{i} \vert = |R_{j} \vert = 1\), the cardinality penalty terms on both sides can be discarded, i.e.,
Since \(|H_{i} \vert = |H_{j} \vert\), the main inequality is rewritten as
Each head set can be divided into two parts: \(H_{k} {\setminus } T'_{k} \ \text{and} \ H_{k} \cap T'_{k}, \ \forall k = i, j\). For \(H_{k} {\setminus } T'_{k}\), as described in A-IV, the probabilistic distance is maximized to \(\mathcal{L}_{max} = \log {2}\). For \(H_{k} \cap T'_{k}\), by using the fact that \(T_{k} \cap H'_{k} \ne \emptyset\), \(\forall k = i,j\), we can derive that \(\mathcal{L}(p_{h}, p^{*}_{h}) < \log {2}\) holds for all \(v_{h} \in H_{k} \cap T'_{k}\), \(\forall k=i,j\), by A-VI. Another notable fact is that, since there is a single reciprocal arc for the target arc, \(\mathcal{L}(p_{h}, p^{*}_{h})\) is the same for every \(v_{h} \in H_{k} \cap T'_{k}\). Here, let \(\bar{p}_{k}, \forall k = i,j\), be the transition probability distribution regarding the target arc \(e_{k}\) and its reciprocal set \(R_{k}\). We rewrite the inequality (17) as
Below, we show that this inequality holds for Case (i) and then Case (ii).
\({{{\textbf{Case}}}\,(i)}:\) For Case (i), we first show that the inequality (17) is equivalent to \(\mathcal{L}(\bar{p}_{i}, \bar{p}^{*}_{i}) > \mathcal{L}(\bar{p}_{j}, \bar{p}^{*}_{j})\). The intersection of the target arc’s head set and the reciprocal arc’s tail set is larger for \(e_{j}\) than for \(e_{i}\). Thus, the following inequality holds:
Therefore, Eq. (17) is implied by
Now, we show that \(\mathcal{L}(\bar{p}_{i}, \bar{p}^{*}_{i}) > \mathcal{L}(\bar{p}_{j}, \bar{p}^{*}_{j})\) holds. To this end, denote the size of the intersection regions as \(F_{1} = |T_{i} \cap H'_{i} \vert \ < \ F_{2} = |T_{j} \cap H'_{j} \vert\). We can decompose the domain of \(v \in V\) into four parts as
For the last part, both the transition probability and the optimal transition probability of it have zero mass, i.e., \(p_{h}(v) = p^{*}_{h}(v) = 0\), which results in no penalty. We only need to consider the first three parts for comparison. Here, the probabilistic distance can be explicitly written as
where \(\ell\) denotes a single-element comparison of \(\textrm{JSD} ({P}\Vert {Q})\) in Eq. (15). Let \(A = |H'_{i} \vert = |H'_{j} \vert\) and \(T = |T_{i} \vert = |T_{j} \vert\). Then, we can rewrite \(\mathcal{L}(\bar{p}_{i}, \bar{p}^{*}_{i}) - \mathcal{L}(\bar{p}_{j}, \bar{p}^{*}_{j}) > 0\) as
The inequality holds since \(\log (x)>0\) holds for any \(x>1\). Hence, we have shown that \(\mathcal{L}(\bar{p}_{i}, \bar{p}^{*}_{i}) > \mathcal{L}(\bar{p}_{j}, \bar{p}^{*}_{j})\), and thus the inequality (17) holds for Case (i).
\({{{\textbf{Case}}}\,(ii)}:\) For Case (ii), we first show that the inequality (17) is equivalent to \(\mathcal{L}(\bar{p}_{i}, \bar{p}^{*}_{i}) \ge \mathcal{L}(\bar{p}_{j}, \bar{p}^{*}_{j})\). The inequality can be rewritten as
By the condition of the axiom, the intersection of the target arc’s head set and the reciprocal arc’s tail set is larger for \(e_{j}\) than for \(e_{i}\), and thus \(\frac{|H_{j} \cap T'_{j}\vert }{|H_{i} \cap T'_{i} \vert } > 1\) holds. Thus, Eq. (17) is implied by \(\frac{\log 2 - \mathcal{L}(\bar{p}_{i}, \bar{p}^{*}_{i})}{\log 2 - \mathcal{L}(\bar{p}_{j}, \bar{p}^{*}_{j})} \le 1\), which is equivalent to \(\mathcal{L}(\bar{p}_{i}, \bar{p}^{*}_{i}) \ge \mathcal{L}(\bar{p}_{j}, \bar{p}^{*}_{j})\). Now we show that \(\mathcal{L}(\bar{p}_{i}, \bar{p}^{*}_{i}) \ge \mathcal{L}(\bar{p}_{j}, \bar{p}^{*}_{j})\) holds, which is rewritten as
Note that, unlike Case (i), where \(F_{1} < F_{2}\) holds, \(F_{1} \le F_{2}\) holds for Case (ii). If \(F_{2} = F_{1}\), then the LHS above becomes 0, and thus the above inequality holds. If \(F_{2} > F_{1}\),
The inequality holds since \(\log (x)>0\) holds for any \(x>1\). Hence, we have shown that \(\mathcal{L}(\bar{p}_{i}, \bar{p}^{*}_{i}) \ge \mathcal{L}(\bar{p}_{j}, \bar{p}^{*}_{j})\), and thus the inequality (17) holds for Case (ii). \(\square\)
Proposition 2B.
HyperRec (i.e., defining \(r(e_i, R_i)\) as in Eq. (5)) satisfies Generalized Axiom 2B.
Proof
Since \(|R_{i} \vert = |R_{j} \vert = 1\), the cardinality penalty terms can be ignored. The overall inequality is re-written as
As in the previous proof, \(|H_{i} \vert = |H_{j} \vert\), and \({\mathcal{L}}(p_{h}, p^{*}_{h})\) is identical for every \(v_{h} \in H_{k} \cap T'_{k}\). Let \({\bar{p}}_{k}, \forall k = i,j\) be the transition probability distribution regarding the target arc \(e_{k}\) and its reciprocal set \(R_{k}\). Here, \(|H_{i} \cap T'_{i} \vert = |H_{j} \cap T'_{j} \vert\), \(|T'_{i} \vert = |T'_{j} \vert\), and the number of target arc’s head set nodes \(v_{h}\) that satisfy \({\mathcal{L}}(p_{h}, p^{*}_{h}) < \log {2}\) is identical for both cases. Thus, the above inequality is re-written as
Now, we only need to show that the probabilistic distance between transition probability and the optimal transition probability is greater in \(e_{i}\) than in \(e_{j}\). Let \(A = |H'_{i} \vert \ > \ B = |H'_{j} \vert\). We can decompose the domain of \(v \in V\) into four parts as
Here, \(\textrm{JSD} ({P}\Vert {Q})\) in the second and fourth parts is identical for both cases. That is, we only need to compare the probabilistic distances that are related to the first and third parts of the above four domains. That is,
where \(F = |H'_{i} \cap T_{i} \vert = |H'_{j} \cap T_{j} \vert\) and \(T = |T_{i} \vert =|T_{j} \vert\). Note that \(A > B\). The overall inequality is rewritten as
To simplify the equation, we unfold \(\ell (p,q)\) as
We show that the last inequality holds by splitting it into two parts: \(\log {\frac{AB + BT}{AB + AT}} < 0\) and \(\frac{T}{A} \log {(1 + \frac{A}{T})} - \frac{T}{B} \log {( 1 +\frac{B}{T})} < 0\). The first part is trivial since \(B < A\) implies
In the second part, each term is of the form \(f(x) = \frac{1}{x}\log {(1 + x)}\). Since f(x) is decreasing at \(x>0\) (see Footnote 6), \(A > B\) implies
\(\square\)
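The monotonicity used in the last step of the proof can be checked numerically (a minimal sketch of the footnoted fact that \(f(x)=\frac{1}{x}\log (1+x)\) is decreasing for \(x>0\); the grid and the values of A, B, T are arbitrary test choices):

```python
import math

def f(x):
    """f(x) = (1/x) * log(1 + x), decreasing for x > 0."""
    return math.log(1.0 + x) / x

# f is decreasing across a grid of positive points
xs = [0.1 * k for k in range(1, 100)]
assert all(f(a) > f(b) for a, b in zip(xs, xs[1:]))

# Hence, for A > B > 0 and T > 0: (T/A)*log(1 + A/T) < (T/B)*log(1 + B/T),
# since the left side is f(A/T) and the right side is f(B/T).
A, B, T = 7.0, 3.0, 5.0
assert (T / A) * math.log(1 + A / T) < (T / B) * math.log(1 + B / T)
```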
1.2.3 Proof of the Fact that HyperRec Satisfies Axiom 3
Through an example, Axiom 3 states that when two arcs inversely overlap with their reciprocal sets to the same extent, an arc with a single reciprocal arc is more reciprocal than one with multiple reciprocal arcs. Axiom 3 is split into two cases, formalized in Generalized Axioms 3A and 3B respectively, where an arc with a single reciprocal arc is compared with an arc with exactly two reciprocal arcs. In Remark 1, we further generalize them to compare the former with an arc with two or more reciprocal arcs, and we provide a proof sketch showing that these extended statements hold for our proposed measure.
Proposition 3A.
HyperRec (i.e., defining \(r(e_i, R_i)\) as in Eq. (5)) satisfies Generalized Axiom 3A.
Proof
Since the sizes of the reciprocal sets differ, the cardinality penalty term must be considered. Here, \(r(e_{i}, R_{i})\) and \(r(e_{j}, R_{j})\) are rewritten as
Since \(\alpha > 0\), \(r(e_{i}, R_{i}) < r(e_{j}, R_{j})\) is implied by
Since \(|H_{i} \vert = |H_{j} \vert\), the inequality is rewritten as
For the target arc \(e_{i}\), since \(T'_{i1} = T'_{i2}\), \(p_{h}\) has the same distribution for every \(v_{h} \in H_{i}\). For the target arc \(e_{j}\), since there is only a single reciprocal arc, \(p_{h}\) has the same distribution for every \(v_{h} \in H_{j}\). Let \({\bar{p}}_{k}, \forall k = i,j\), be the transition probability distribution regarding the target arc \(e_{k}\) and its reciprocal set \(R_{k}\). Here, the inequality (18) is rewritten as
Since \(e'_{i1} \subseteq _{(R)} e_{i}, \ e'_{i2} \subseteq _{(R)} e_{i}, \ e'_{j} \subseteq _{(R)} e_{j}\), and \(|T'_{j} \vert = |T'_{i1} \vert\), the last inequality is rewritten as
The proof can be done by showing the last inequality, \({\mathcal{L}}({\bar{p}}_{j}, {\bar{p}}^{*}_{j}) \le {\mathcal{L}}({\bar{p}}_{i}, {\bar{p}}^{*}_{i})\).
In order to show \({\mathcal{L}}({\bar{p}}_{j}, {\bar{p}}^{*}_{j}) \le {\mathcal{L}}({\bar{p}}_{i}, {\bar{p}}^{*}_{i})\), we should take a close look at the transition probability in \(e_{i}\). Since the head sets of the two reciprocal arcs do not overlap, the transition probability is
Since \(e'_{i1} \subseteq _{(R)} e_{i}, \ e'_{i2} \subseteq _{(R)} e_{i}\), and \(e'_{j} \subseteq _{(R)} e_{j}\), the domain of \(v\in V\) can be divided into
Since \(|T_{i} \vert = |T_{j} \vert\), the probability mass for the last part is identical for both cases. Let \(A = |H'_{j} \vert\), \(B = |H'_{i1} \vert\), and \(T = |T_{i} \vert = |T_{j} \vert\). Since \(H'_{i1}, H'_{i2} \subseteq T_{i}\), \(H'_{j} \subseteq T_{j}\), \(H'_{i1} \cap H'_{i2} = \emptyset\), and \(|(H'_{i1} \cup H'_{i2}) \cap T_{i} \vert = |H'_{j}\cap T_{j} \vert\), \(|H'_{i1} \vert + |H'_{i2} \vert = |H'_{j} \vert\) holds. Then, based on the above fact, we rewrite \({\mathcal{L}}({\bar{p}}_{j}, {\bar{p}}_{j}^{*}) \le {\mathcal{L}}({\bar{p}}_{i}, {\bar{p}}_{i}^{*})\) as
where for the last equivalence, we subtract \(\left( \frac{A}{2T} + \frac{1}{2} \right) \log {2} = \left( \frac{A}{2T} + \frac{A}{2A} \right) \log {2} = \left( \frac{B}{2T} + \frac{B}{4B} + \frac{A-B}{2T} + \frac{B-A}{4(B-A)}\right) \log {2}\) from both sides. We show the last inequality by dividing it into two parts and proving each. If the following two inequalities hold, the proof is done.
We first show the inequality (20). Multiplying both sides by \(2T\), we get
Here, we prove this inequality by using the functional form of \(f(B) = B\log {\frac{2B}{2B + T}} + (A-B)\log {\frac{2(A-B)}{2(A-B) + T}}\) where \(0< B < A\). Its derivative is
Thus, \(\frac{\partial f(B)}{\partial B} = \log {x} - x - (\log {y} - y)\) for \(x = \frac{2B}{2B+T}\) and \(y = \frac{2(A-B)}{2(A-B) + T}\), which satisfy \(0<x,y<1\), and it has the following properties:
- If we plug in \(B = \frac{A}{2}\), then \(f'(B) = 0\) holds,
- \(\log {x} - x\) is an increasing function on \(0< x < 1\),
- If \(0< B < \frac{A}{2}\), then \((\log {y}) - y > (\log {x}) - x\), which implies \(f'(B) < 0\),
- If \(\frac{A}{2}< B < A\), then \((\log {x}) - x > (\log {y}) - y\), which implies \(f'(B) > 0\).
From these properties, we can derive
Hence, when \(0<B<A\), f(B) has its minimum value at \(B=A/2\), and therefore the inequality (22), which is equivalent to the inequality (20), holds. Now we show the inequality (21), which is rewritten as
The last inequality trivially holds. \(\square\)
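The minimization argument above can be verified numerically. The sketch below evaluates \(f(B)\) on a grid for hypothetical values of \(A\) and \(T\) and confirms that the minimum is attained at \(B = A/2\):

```python
import math

def f(B, A, T):
    # f(B) = B*log(2B/(2B+T)) + (A-B)*log(2(A-B)/(2(A-B)+T)), for 0 < B < A
    return (B * math.log(2 * B / (2 * B + T))
            + (A - B) * math.log(2 * (A - B) / (2 * (A - B) + T)))

A, T = 10.0, 7.0  # hypothetical sizes for illustration
mid = f(A / 2, A, T)
# every interior grid point should have a value >= the value at B = A/2
grid = [A * k / 200 for k in range(1, 200)]
assert all(f(B, A, T) >= mid - 1e-12 for B in grid)
```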
Proposition 3B.
HyperRec (i.e., defining \(r(e_i, R_i)\) as in Eq. (5)) satisfies Generalized Axiom 3B.
Proof
By following the proof of Proposition 3A, we can show that \(r(e_i, R_i) < r(e_j, R_j)\) is implied by
Since \(T'_{i1} \cap T'_{i2} = \emptyset\) and \(H'_{i1} = H'_{i2}\), the transition probability is identical for every \(v_{h} \in H_{i} \cap (T'_{i1} \cup T'_{i2})\) for \(e_{i}\). The same holds for \(e_{j}\), whose reciprocal set has only one arc. Let \({\bar{p}}_{k}\), for \(k = i, j\), be the transition probability distribution regarding the target arc \(e_{k}\) and its reciprocal set \(R_{k}\). Since \(|H_{i} \cap (T'_{i1} \cup T'_{i2})| = |H_{j} \cap T'_{j}|\), the above inequality is rewritten as
Since \(e'_{i1} \subseteq _{(R)} e_{i}\), \(e'_{i2} \subseteq _{(R)} e_{i}\), \(e'_{j} \subseteq _{(R)} e_{j}\), and \(|H'_{i1} \vert = |H'_{j} \vert\), the probabilistic distances for \(e_{i}\) and \(e_{j}\) are identical. That is,
and thus \({\mathcal{L}}({\bar{p}}_{i}, {\bar{p}}_{i}^{*}) \le {\mathcal{L}}({\bar{p}}_{j}, {\bar{p}}_{j}^{*})\) also holds. \(\square\)
Remark 1
(Extension of Propositions 3A and 3B to Multiple Hyperarc Cases). Although the statement of Proposition 3A presents a case with two arcs in \(R_{i}\) and a single arc in \(R_{j}\), it can be further generalized to \(K \ge 2\) hyperarcs in \(R_{i}\) and a single arc in \(R_{j}\) under the following conditions, which are equivalent to the current conditions when \(K=2\): (1) the head sets of the arcs in \(R_{i}\) are disjoint and their tail sets are identical, (2) all arcs in \(R_{i}\) satisfy the condition of \(\subseteq _{(R)}\), and (3) the coverage of \(T_{i}\) by the head sets of the arcs in \(R_{i}\) is of the same size as the coverage of \(T_{j}\) by the head set of the arc in \(R_{j}\).
We provide a proof sketch to demonstrate the validity of HyperRec in this generalized setting. As in the proof of Proposition 3A, it suffices to show that Eq. (24) holds, which generalizes Eq. (19),
where \(A = \sum _{i=1}^{K}B_{i}\). Considering that the Jensen-Shannon Divergence (JSD) is an average of two KL-divergence terms, and that the KL divergence is a well-known convex function when one probability distribution is fixed (\(1/T\) in our case), we can derive that each term \(\ell\) in Eq. (24) is also a convex function.
From this fact, we can apply Jensen's Inequality, which states that \(f(a_{1}x_{1} + \cdots + a_{K}x_{K}) \le a_{1}f(x_{1}) + \cdots + a_{K}f(x_{K})\) holds for a convex function \(f\) and non-negative coefficients \(a_{1},\ldots , a_{K}\) with \(\sum _{i=1}^{K} a_{i} = 1\). Since \(\ell (\frac{1}{T}, x)\) is a convex function with respect to \(x\), we derive Eq. (25) by setting \(a_{i} = B_{i}/A\) and \(x_{i} = 1/(KB_{i})\).
Multiplying both sides of Eq. (25) by A implies Eq. (24), which is the result we aim to show.
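This application of Jensen's Inequality can be checked numerically. The sketch below assumes \(\ell\) is the pointwise Jensen-Shannon summand (the exact form of \(\ell\) follows the paper's Eq. (24)) and uses hypothetical sizes \(B_{1},\ldots ,B_{K}\); note that \(\sum _i a_i x_i = \sum _i \frac{B_i}{A}\cdot \frac{1}{KB_i} = \frac{1}{A}\):

```python
import math

# Assumed form of ell: the pointwise Jensen-Shannon term (summand of JSD).
def ell(p, q):
    m = (p + q) / 2
    term = lambda a: 0.0 if a == 0 else (a / 2) * math.log(a / m)
    return term(p) + term(q)

T = 5.0
K = 3
Bs = [1.0, 2.0, 4.0]  # hypothetical B_1, ..., B_K
A = sum(Bs)

lhs = A * ell(1 / T, 1 / A)                          # LHS of the generalized Eq. (24)
rhs = sum(B * ell(1 / T, 1 / (K * B)) for B in Bs)   # RHS of the generalized Eq. (24)
assert lhs <= rhs + 1e-12  # Jensen's Inequality, since ell(1/T, x) is convex in x
```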
Similarly, we can further generalize Proposition 3B to a case with \(K \ge 2\) arcs in \(R_{i}\) and a single arc in \(R_{j}\) under the following conditions, which are equivalent to the current conditions when \(K=2\): (1) the tail sets of the arcs in \(R_{i}\) are disjoint and their head sets are identical, (2) all arcs in \(R_{i}\) satisfy the condition of \(\subseteq _{(R)}\), and (3) the coverage of \(H_{i}\) by the tail sets of the arcs in \(R_{i}\) is of the same size as the coverage of \(H_{j}\) by the tail set of the arc in \(R_{j}\). The proof provided for Proposition 3B can be directly applied in this generalized setting to show the validity of HyperRec.
1.2.4 Proof of the Fact that HyperRec Satisfies Axiom 4
Axiom 4 states, through an example, that an arc whose reciprocal arcs are equally reciprocal to all nodes in the arc is more reciprocal than one whose reciprocal arcs are biased towards a subset of the nodes in the arc. The generalized statement of Axiom 4 is formalized in Generalized Axiom 4.
Proposition 4.
HyperRec (i.e., defining \(r(e_i, R_i)\) as in Eq. (5)) satisfies Generalized Axiom 4.
Proof
By the definition, the inequality is rewritten as
Let \({\bar{p}}_{k}\), for \(k = i, j\), be the transition probability distribution regarding the target arc \(e_{k}\) and its reciprocal set \(R_{k}\), which does not depend on the starting node \(v_{h}\). The above inequality is rewritten as
Here, we prove the last inequality by showing that, in the above setting, (a) \(R_{j}\) minimizes the distance (i.e., \({\mathcal{L}}(p_{h, j}, p_{h, j}^{*}) \equiv {\mathcal{L}}({\bar{p}}_{j}, {\bar{p}}_{j}^{*})\)), and (b) the distance is inevitably larger in all other cases. By the assumptions, for the target arc \(e_j\), the corresponding reciprocal arcs have a head set of size 2, and the number of reciprocal arcs equals the number of tail nodes (i.e., \(|R_{j} \vert = |T_{j} \vert\)). In addition, the head set of every reciprocal arc is a subset of the tail set \(T_{j}\) of \(e_j\) and \(T'_{j} = H_{j}\). Thus, Eq. (3), which is about \(e_{j}\), implies that every node \(v \in T_{j}\) in the tail set is included in two of the head sets of the reciprocal arcs. Because of these facts, the transition probability can be written as
Note that this is identical to the optimal transition probability.
Now, consider the case of \(e_{i}\). Here, due to Eq. (2), the transition probability cannot be uniform as in the case of \(e_{j}\). Assume a node \(v'_{i1} \in T_{i}\) belongs to the reciprocal arcs' head sets \(K \ne 2\) times. Then, the transition probability assigned to \(v'_{i1}\) is \(p(v'_{i1}) = \frac{1}{2|T_{i}|} \times K \ne \frac{1}{|T_{i}|}\). This indicates that the transition probability of \(e_{i}\) is not optimal. Thus, the following inequality holds:
\(\square\)
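The gap between the uniform and a biased transition distribution can be checked numerically. The sketch below assumes the probabilistic distance is the Jensen-Shannon divergence (as used by HyperRec) and uses a hypothetical biased allocation where one tail node is covered three times and another only once:

```python
import math

# JSD between two distributions given as probability vectors.
def jsd(p, q):
    def kl(a, b):
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

T = 4
optimal = [1 / T] * T                  # p*: uniform over the tail set
uniform = [1 / T] * T                  # e_j's case: every tail node covered twice
biased = [2 / (2 * T), 3 / (2 * T), 2 / (2 * T), 1 / (2 * T)]  # e_i's case (K != 2)

assert jsd(uniform, optimal) == 0      # the uniform distribution attains distance 0
assert jsd(biased, optimal) > 0        # any biased allocation is strictly farther
```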
1.2.5 Proof of the Fact that HyperRec Satisfies Axioms 5-8
Proposition 5.
HyperRec (i.e., defining \(r(e_i, R_i)\) as in Eq. (5)) satisfies Axiom 5.
Proof
This can be shown by using the known range of the probabilistic distance \({\mathcal{L}}(p,q)\) as follows:
\(\square\)
Proposition 6.
HyperRec (i.e., defining r(G) as in Eq. (1)) satisfies Axiom 6.
Proof
Recall that the hypergraph level reciprocity of HyperRec is defined as
By the assumption, the size of every arc’s head set is 1, and thus each \(r(e_{i}, R_{i})\) is rewritten as
where \(\{v_{h}\} = H_{i}\). Here, the optimal transition probability is
For a case where the perfectly reciprocal opponent \(e'_i= \langle H'_{i} = T_{i}, T'_{i} = H_{i} \rangle\) of \(e_i\) exists (i.e., \(e'_{i} \in E\)), \(R_i=\{e'_i\}\) maximizes \(\left( \frac{1}{|R_{i} \vert }\right) ^{\alpha } \left( 1 - \frac{{\mathcal{L}}{(p_{h}, p^{*}_{h})}}{{\mathcal{L}}_{max}}\right)\) since it minimizes both \(|R_{i} \vert\) (to 1) and \({\mathcal{L}}{(p_{h}, p^{*}_{h})}\) (to 0). Thus, \(r(e_{i})\) becomes 1.
Now consider the case where the perfectly reciprocal opponent \(e'_i= \langle H'_{i} = T_{i}, T'_{i} = H_{i} \rangle\) of \(e_i\) does not exist (i.e., \(e'_{i} \notin E\)). Then, for each arc \(e_k\) in the reciprocal set \(R_i\), since \(|H_{i}|=|T_{i}|=|H_{k}|=|T_{k}|=1\), \(H_{i}\cap T_{k}=\emptyset\) or \(T_{i}\cap H_{k}=\emptyset\) should hold. Thus, there is no transition from any node in \(H_{i}\) to any node in \(T_{i}\), and as a result, \(p_h(v)=0\), \(\forall v_h\in H_{i}\), \(\forall v\in T_{i}\). Hence, for every \(R_{i}\subseteq E\), by A-II, \({\mathcal{L}}{(p_{h}, p^{*}_{h})}={\mathcal{L}}_{max}\), and thus \(r(e_i)=0\).
Considering both cases together, \(r(e_{i})\) becomes an indicator function that gives 1 if the perfectly reciprocal opponent exists and 0 otherwise. Formally,
where \(\mathbbm {1}(\text{TRUE}) = 1\) and \(\mathbbm {1}(\text{FALSE}) = 0\); and this is identical to the digraph reciprocity measure (Newman et al. 2002; Garlaschelli and Loffredo 2004), i.e., \({|E^{\leftrightarrow } \vert }/{|E \vert }\). \(\square\)
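As a sanity check on this reduction, the digraph reciprocity \(|E^{\leftrightarrow}|/|E|\) of Newman et al. (2002) can be computed directly; the toy digraph below is hypothetical:

```python
# Digraph reciprocity |E_reciprocal| / |E|, the measure that HyperRec
# reduces to when every head set and tail set has size 1 (Axiom 6).
def digraph_reciprocity(edges):
    edge_set = set(edges)
    reciprocated = [(u, v) for (u, v) in edges if (v, u) in edge_set]
    return len(reciprocated) / len(edges)

# hypothetical 3-arc digraph with one mutual pair
edges = [("u", "v"), ("v", "u"), ("u", "w")]
assert abs(digraph_reciprocity(edges) - 2 / 3) < 1e-12
```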
Proposition 7.
HyperRec (i.e., defining r(G) as in Eq. (1)) satisfies Axiom 7.
Proof
Recall that the hypergraph-level reciprocity of HyperRec is defined as
By Axiom 5, \(0\le r(e_{i}, R_{i})\le 1\) for any \(e_{i}\) and \(R_{i}\). This implies \(0 \le \sum _{i=1}^{|E|} r(e_{i}) \le |E|\), which is equivalent to \(0 \le r(G)=\frac{1}{|E|} \sum _{i=1}^{|E|} r(e_{i}) \le 1\). \(\square\)
Proposition 8.
HyperRec (i.e., defining r(G) as in Eq. (1)) satisfies Axiom 8.
Proof
We first show that the maximum value of HyperRec is attainable under the given condition of Axiom 8. For an arbitrary hypergraph \(G\), let \(E'=\{e_{i} \in E: \langle T_{i}, H_{i}\rangle \notin E \}\) be the set of arcs whose perfectly reciprocal opponents do not exist in \(G\). Let \(E^{add} = \bigcup _{e_{i} \in E'} \{\langle T_{i}, H_{i} \rangle\}\) be the set of perfectly reciprocal opponents of the arcs in \(E'\). If we add \(E^{add}\) to \(G\), which gives \(G^{+}=(V,E^{+}=E \cup E^{add})\), then for each arc \(e_i\in E^{+}\), the perfectly reciprocal opponent \(e'_i= \langle H'_{i} = T_{i}, T'_{i} = H_{i} \rangle\) of \(e_i\) exists (i.e., \(e'_{i} \in E^{+}\)), and thus
which implies that \(r(G^{+})=\frac{1}{|E^+ \vert } \sum _{i = 1}^{|E^+ \vert } r(e_{i})=1\).
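The construction of \(E^{add}\) can be sketched as follows; representing each arc as a (head, tail) pair of frozensets is an assumption for illustration:

```python
# Add the perfectly reciprocal opponent <T_i, H_i> of every arc <H_i, T_i>
# that lacks one; afterwards every arc in E^+ has its opponent, so r(G^+) = 1
# by the argument above.
def close_under_reciprocity(arcs):
    # arcs: set of (head, tail) pairs of frozensets
    opponents = {(t, h) for (h, t) in arcs}
    return arcs | (opponents - arcs)   # E^+ = E ∪ E^add

E = {
    (frozenset({1}), frozenset({2, 3})),
    (frozenset({2, 3}), frozenset({1})),   # opponent already present
    (frozenset({4}), frozenset({5})),      # opponent missing
}
E_plus = close_under_reciprocity(E)
assert all((t, h) in E_plus for (h, t) in E_plus)  # every arc has its opponent
assert len(E_plus) == 4                            # exactly one arc was added
```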
We now show that the minimum value of HyperRec is attainable under the given condition of Axiom 8. For an arbitrary hypergraph \(G=(V,E)\), let \(E^{-} = \{e_{i}\}\) be a hyperarc set containing a single hyperarc \(e_{i} \in E\). For the hypergraph \(G^{-} = (V, E^{-})\), the only possible choice of \(R_{i}\) is \(R_{i} = \{e_{i}\}\), since \(R_{i}\) must be non-empty and \(e_{i}\) is the only hyperarc in \(G^{-}\), and thus
which implies that \(r(G^{-}) = r(e_{i}) = 0.\)\(\square\)
1.3 Proof of Theorem 2
Proof
Refer to Sect. 3.4 for the definition of \(\Phi _{i}(\langle H'_{i}, T'_{i}\rangle )\). Given a target arc \(e_{i}\), let \(e_{a}\) and \(e_{b}\) be two arcs in the set \(\Phi _{i}(\langle H'_{i}, T'_{i}\rangle )\) with \(A=|H_a| \le B=|H_b|\). Consider an arbitrary reciprocal set \(R_{i}\subseteq E\). We use \(p_{(i, a, h)}\) to denote the probability distribution at each node \(v_{h} \in H'_{i}\) when the reciprocal set is \(R_{i} \cup \{e_{a}\}\). Then, the probabilistic distance between \(p_{(i, a, h)}\) and \(p^{*}_{h}\) is rewritten as
where \(E'_{(i, h)} = \{e_{k} \in R_{i}:v_{h} \in T_{k}\}\), \(K = |E'_{(i,h)}|\), and \(q_{(i, h)}\) is the probability distribution at each node \(v_{h} \in H'_{i}\) when the reciprocal set is \(R_{i}\). Note that, by the definition of \(H'_{i}\) and \(T'_{i}\), \(H'_{i} = T_{a} \cap H_{i} = T_{b} \cap H_{i}\) and \(T'_{i} = H_{a} \cap T_{i} = H_{b} \cap T_{i}\) hold. In the same way, we define \(p_{(i,b,h)}\) as the probability distribution at each node \(v_{h} \in H'_{i}\) when \(R_{i} \cup \{e_{b}\}\) is the reciprocal set. Then, \({\mathcal{L}}(p_{(i, b, h)}, p^{*}_{h})\) can be rewritten as in Eq. (26).
We prove the theorem by showing that \({\mathcal{L}}(p_{(i, a, h)}, p^{*}_{h}) \le {\mathcal{L}}(p_{(i, b, h)}, p^{*}_{h})\) holds for every \(v_h\in H'_{i}=H_{i} \cap T_{a} = H_{i} \cap T_{b}\). Note that the second and third terms of the RHS do not depend on \(e_{a}\) and \(e_{b}\), and they are identical in \({\mathcal{L}}(p_{(i, a, h)}, p^{*}_{h})\) and \({\mathcal{L}}(p_{(i, b, h)}, p^{*}_{h})\). Thus, we rewrite \({\mathcal{L}}(p_{(i, a, h)}, p^{*}_{h}) \le {\mathcal{L}}(p_{(i, b, h)}, p^{*}_{h})\) as
For simplicity, let \(|T_{i}| = T\) and \(|T'_{i}| = F\). Then, the above inequality is rewritten as
Let \(v' = {{\,\textrm{argmax}\,}}_{v \in T'_{i}} \ell (\frac{1}{T}, \frac{Kq_{(i, h)}(v)}{K+1} + \frac{1}{A(K+1)}) - \ell (\frac{1}{T}, \frac{Kq_{(i, h)}(v)}{K+1} + \frac{1}{B(K+1)})\), and let \(p'=q_{(i,h)}(v')\). Then, the following inequality holds:
Thus, the inequality (27) is implied by
By unfolding \(\ell\) in the LHS and dividing both sides by F, the inequality (28) is rewritten as
where \(P = \frac{Kp'}{K+1}\). Let \(P_{A} = P + \frac{1}{A(K+1)}\) and \(P_{B} = P + \frac{1}{B(K+1)}\), where \(P_{A} \ge P_{B}\). Then, by cancelling out all identical terms, the above inequality is simplified as
If we multiply both sides by \(T\), this inequality is implied by the following two inequalities:
The inequality (29) is trivial since \(P_{A} \ge P_{B}\). For the inequality (30), the numerator and the denominator are in the form of \(f(n) = (1+\frac{1}{n})^{n}\), which is a non-decreasing function. Thus, the denominator is always greater than or equal to the numerator, thus satisfying the inequality. \(\square\)
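The monotonicity fact used for the inequality (30) is easy to check numerically:

```python
# Numeric check: f(n) = (1 + 1/n)^n is non-decreasing for n > 0
# (it increases monotonically toward e).
def f(n):
    return (1 + 1 / n) ** n

ns = [0.5, 1, 2, 3, 5, 10, 100, 1000]
assert all(f(a) <= f(b) for a, b in zip(ns, ns[1:]))
```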
1.4 Proof of Corollary 1
Proof
Since HyperRec is a weighted average of the probabilistic distances defined for each head set node of the target arc, we consider each head set node \(v_h\in H_i\) of the target arc \(e_i\). Since the tail set size of every arc is 1 (i.e., \(|T_{i}| = 1\)), there exists at most one arc in \(\Psi _{i}\) that covers a specific node \(v_{h}\) in the head set \(H_i\) of the target arc, i.e., \(\forall v_h \in H_i\),
Let \(v'\) be the target arc’s tail set node, i.e., \(T_{i}=\{v'\}\). Then, by the definition of \(\Psi _{i}\), \(v'\) is included in the head set of every \(e_{k} \in \Psi _{i}\), i.e., \(\forall e_{k} \in \Psi _{i}\),
Equations (31) and (32) imply that, \(\forall e_{k} \in R_{i} \subseteq \Psi _{i}\), Eq. (33) holds.
Let \(A = |H'_{h1}|\) and \(B = |H'_{h2}|\). Then, \(\forall e'_{h1},e'_{h2} \in R_{i} \subseteq \Psi _{i} \text{ s.t. } |H'_{h1}| \le |H'_{h2}|\),
We prove the inequality (34) by showing that both the first and the second terms of the LHS of the inequality (35) are at most 0. Since \(A \le B\), the first term is trivially at most 0. The second term has the functional form \(\log (f(x)/f(x'))\), where \(f(x)\) is a non-increasing function and \(x\le x'\), and thus the second term is also at most 0.
In addition, let \(e'_{h} \in \Psi _{i}\) be the only arc with \(T'_{h}=\{v_{h}\}\). Then, the probabilistic distance at each head set node \(v_{h}\) depends on whether \(e'_{h}\) is in the reciprocal set \(R_i\subseteq \Psi _{i}\), as follows:
where \(p'_{h}\) is the probability distribution at \(v_h\) when the reciprocal set is \(R_i=\{e'_h\}\), and \(p^{*}_{h}\) is the optimal probability distribution at a node \(v_{h} \in H_{i}\).
By Eqs. (31) and (36), for any \(R'_{i} = \{e'_{h1}, \ldots , e'_{hk}\}\subseteq \Psi _{i}\), Eq. (37) holds.
where \(T'_{hj}=\{v_{hj}\}\) for every \(j\in \{1,\ldots , k\}\). In addition, Eq. (37) and the inequality (34) imply that drawing \(k\) reciprocal arcs from \(\Psi _{i}\) in ascending order of their head set size achieves the maximum reciprocity for fixed \(k\). Let \(\Gamma _{i,k}\) be such a reciprocal set. Formally, if we let \(\Gamma _{i, k}\) be a subset of \(\Psi _{i}\) such that \(|\Gamma _{i,k}| = k\) and \(|H_{s}| \le |H_{t}|, \ \forall e_{s} \in \Gamma _{i, k}, \ \forall e_{t} \in \Psi _{i} {\setminus } \Gamma _{i, k}\), then \({{\,\textrm{argmax}\,}}_{R_{i} \subseteq \Psi _{i} \text{ s.t. } |R_{i}| = k} r(e_i,R_i) = \Gamma _{i, k}\) holds. \(\square\)
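Corollary 1 suggests a simple selection rule, sketched below under a hypothetical arc representation: for a fixed budget \(k\), pick the \(k\) candidate arcs of \(\Psi_{i}\) with the smallest head sets.

```python
# Sketch of the selection rule implied by Corollary 1: under its conditions,
# an optimal size-k reciprocal set Gamma_{i,k} consists of the k candidate
# arcs from Psi_i with the smallest head sets.
def best_reciprocal_subset(psi_i, k):
    # psi_i: list of (head_set, tail_set) candidate arcs (illustrative format)
    return sorted(psi_i, key=lambda arc: len(arc[0]))[:k]

psi = [
    ({"a", "b", "c"}, {"x"}),
    ({"a"}, {"y"}),
    ({"a", "b"}, {"z"}),
]
gamma = best_reciprocal_subset(psi, 2)
assert [len(h) for h, _ in gamma] == [1, 2]  # the two smallest head sets
```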
Appendix 2: Limitations of baseline measures
In this section, we show why several baseline measures fail to satisfy some of Axioms 1–8. Below, we use \(G_{i}=(V_{i},E_{i})\) and \(G_{j}=(V_{j},E_{j})\) to denote the hypergraphs on the left and right sides, respectively, of each subfigure of Fig. 2.
1.1 Violations of Axiom 3
We show how several baseline measures violate Axiom 3. In Fig. 2d, e, \(r(e_{i}, R_{i}) < r(e_{j}, R_{j})\) should hold in order to satisfy Axiom 3. However, we numerically verify that \(r(e_{i}, R_{i}) = r(e_{j}, R_{j})\) holds for some baseline measures, which violates Axiom 3.
B1. Pearcy et al. (2014): They use clique expansion, which transforms every hyperedge of a hypergraph into a clique of a pairwise graph (e.g., \(\langle \{v_{1} \}, \{v_{2}, v_{3} \} \rangle \rightarrow \{\langle \{v_{1}\}, \{v_{2}\}\rangle , \langle \{v_{1}\}, \{v_{3}\}\rangle \}\)). Through this process, the original hypergraph is transformed into a weighted digraph (see Sect. 2.2). Since Pearcy et al. (2014) do not propose any arc-level reciprocity, we compare its hypergraph-level reciprocity on the counterexamples regarding Axiom 3. That is, we compare \(r(G_{i})\) and \(r(G_{j})\) in Fig. 2d, e. As reported in Table 10, \(r(G_{i}) = r(G_{j}) = 0.2093\) holds in both Fig. 2d and e, violating Axiom 3.
B2. Ratio of covered pairs: We compare the ratio of covered pairs (B2), which is an arc-level reciprocity measure, in Fig. 2d, e. As reported in Table 10, \(r(e_{i}, R_{i}=\{e'_{i1},e'_{i2}\}) = r(e_{j}, R_{j} = \{e'_{j}\}) = 0.5625\) holds in both Fig. 2d and e, which violates Axiom 3.
B5. HyperRec w/o size penalty: As reported in Table 10, \(r(e_{i}, R_{i}=\{e'_{i1},e'_{i2}\}) = r(e_{j}, R_{j} = \{e'_{j}\}) = 0.6446\) holds in Fig. 2e, which violates Axiom 3.
1.2 Violations of Axiom 4
We show how several baseline measures violate Axiom 4. In Fig. 2f, \(r(e_{i}, R_{i}) < r(e_{j}, R_{j})\) should hold in order to satisfy Axiom 4. However, we numerically verify that \(r(e_{i}, R_{i}) = r(e_{j}, R_{j})\) holds for some baseline measures, which violates Axiom 4.
B2. Ratio of covered pairs: As reported in Table 11, \(r(e_{i}, R_{i}=E_{i}{\setminus } \{e_{i}\}) = r(e_{j}, R_{j}=E_{j}{\setminus } \{e_{j}\}) = 1.00\) holds in Fig. 2f, which violates Axiom 4.
B3. Penalized ratio of covered pairs: As reported in Table 11, \(r(e_{i}, R_{i}=E_{i}{\setminus } \{e_{i}\}) = r(e_{j}, R_{j}=E_{j}{\setminus } \{e_{j}\}) = 0.25\) holds in Fig. 2f, which violates Axiom 4.
1.3 Violations of Axiom 5
B4. HyperRec w/o normalization: In order to satisfy Axiom 5, reciprocity should always lie in a fixed finite range. Here, we demonstrate that (B4) violates Axiom 5 by showing that its reciprocity value can become infinite. Recall that (B4) is defined as
Consider a case where \(R_{i}=\{e'_{i}=\langle T_{i}, H_{i}\rangle \}\). Then, for each \(v_{h} \in H_{i}\), \({\mathcal{L}}(p_{h}, p^{*}_{h})=0\) holds. In turn, Eq. (38) becomes \(r(e_{i},R_{i})=\vert H_{i}\vert\). In this case, as \(\vert H_{i}\vert\) approaches infinity, \(r(e_{i},R_{i})\) also becomes infinite. Since the value of (B4) does not lie in a fixed finite range, (B4) violates Axiom 5.
1.4 Violations of Axiom 6
B1. Pearcy et al. (2014): Consider the digraph in Fig. 8. Its digraph reciprocity is \(r(G) = \frac{2}{3}\) since \(E = \{ e_{1},e_{2},e_{3} \}\) and \(E^{\leftrightarrow } = \{e_{1}, e_{2}\}\). In this case, however, the clique-expanded adjacency matrices of the digraph and the perfectly reciprocal hypergraph are
Thus, according to the definition in Pearcy et al. (2014), the reciprocity becomes \(\frac{2}{4} = 0.5\) because \(tr({\bar{A}}^{2}) = 2\) and \(tr(\bar{A'}^{2}) = 4\). Since \(\frac{2}{4} \ne \frac{2}{3}\), (B1) violates Axiom 6.
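This discrepancy can be reproduced numerically. The sketch below uses a hypothetical 3-arc digraph with one mutual pair, which is consistent with the reported traces \(tr({\bar{A}}^{2}) = 2\) and \(tr(\bar{A'}^{2}) = 4\) (the exact digraph of Fig. 8 is assumed):

```python
# tr(A^2) counts ordered pairs (i, j) with edges in both directions,
# i.e., twice the number of mutual pairs.
def trace_sq(edges, nodes):
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[idx[u]][idx[v]] = 1
    return sum(A[i][j] * A[j][i] for i in range(n) for j in range(n))

nodes = ["u", "v", "w"]
E = [("u", "v"), ("v", "u"), ("u", "w")]   # one mutual pair out of three arcs
E_recip = E + [("w", "u")]                 # perfectly reciprocal version
assert trace_sq(E, nodes) == 2
assert trace_sq(E_recip, nodes) == 4
assert trace_sq(E, nodes) / trace_sq(E_recip, nodes) == 0.5  # != 2/3
```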
B6. HyperRec with all arcs as reciprocal set: According to Axiom 6, the hypergraph reciprocity value in Fig. 8 should equal \(2/3 \approx 0.6667\). If we let \(\alpha = 1\), then the overall hypergraph-level reciprocity \(r(G)\) based on HyperRec with all arcs as the reciprocal set (B6) is
which violates Axiom 6.
1.5 Violations of Axiom 8
B6. HyperRec with all arcs as reciprocal set: Even when the perfectly reciprocal opponent of a specific arc exists, the transition probability cannot be identical to the optimal transition probability if there exists another inversely overlapping arc (see the target arc \(e_{2}\)'s case in Fig. 8).
B7. HyperRec with inversely overlapping arcs as reciprocal set: As in the previous case, if there exist multiple inversely overlapping arcs, all of them are included in the reciprocal set. As a result, for such arcs, the cardinality penalty term gets smaller than 1 (i.e., \((1/|R_{i} \vert )^{\alpha } < 1\)), resulting in \(r(e_{i}, R_{i}) < 1\). Consequently, the overall hypergraph reciprocity becomes smaller than 1.
Appendix 3: Data description
In this section, we provide the sources of the considered datasets and describe how we preprocess them.
1.1 Metabolic datasets
We use two metabolic hypergraphs, iAF1260b and iJO1366, which are provided by Yadati et al. (2020). They are provided in the form of directed hypergraphs and thus require no pre-processing, except that we remove one hyperarc from each dataset whose head set or tail set is abnormally large (of size greater than 20, while the second largest is 8). Each node corresponds to a gene, and each hyperarc indicates a metabolic reaction among genes. Specifically, a hyperarc \(e_{i}\) indicates that a reaction among the genes in the tail set \(T_{i}\) results in the genes in the head set \(H_{i}\).
1.2 Email datasets
We use two email hypergraphs, email-enron and email-eu. The Email-enron dataset is provided by Chodrow and Mellor (2020). We consider each email as a single hyperarc. Specifically, the head set is composed of the receiver(s) and cc-ed user(s), and the tail set is composed of the sender. The Email-eu dataset is from SNAP (Leskovec and Krevl 2014). The original dataset is a dynamic graph where each temporal edge from a node u to a node v at time t indicates that u sent an email to v at time t. The edges with the same source node and timestamp are replaced by a hyperarc, where the tail set consists only of the source node and the head set is the set of destination nodes of the edges. Note that every hyperarc in these datasets has a unit tail set, i.e., \(|T_{i}| = 1, \forall i \in \{1, \ldots , |E|\}\).
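The described grouping of temporal edges into hyperarcs can be sketched as follows (function and variable names are illustrative, not from the released code):

```python
from collections import defaultdict

# Email-eu preprocessing sketch: temporal edges (u, v, t) with the same
# source and timestamp are merged into one hyperarc whose tail set is {u}
# and whose head set collects the destinations.
def edges_to_hyperarcs(temporal_edges):
    heads = defaultdict(set)
    for u, v, t in temporal_edges:
        heads[(u, t)].add(v)
    return [({u}, dsts) for (u, t), dsts in heads.items()]  # (tail, head)

edges = [(1, 2, 10), (1, 3, 10), (1, 2, 20), (4, 5, 10)]
arcs = edges_to_hyperarcs(edges)
assert ({1}, {2, 3}) in arcs   # two edges sent at t=10 merge into one hyperarc
assert len(arcs) == 3
```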
1.3 Citation datasets
We use two citation hypergraphs, citation-data mining and citation-software, which we create from pairwise citation networks, as suggested by Yadati et al. (2021). Nodes are the authors of publications. Assume that a paper A, which is co-authored by \(\{v_{1}, v_{2}, v_{3}\}\), cites another paper B, which is co-authored by \(\{v_{4}, v_{5}\}\). Then, this citation leads to a hyperarc whose head set is \(\{v_{4}, v_{5}\}\) and whose tail set is \(\{v_{1}, v_{2}, v_{3}\}\). As pairwise citation networks, we use subsets of a DBLP citation dataset (Sinha et al. 2015). The subsets consist of papers published in the venues of data mining and software engineering, respectively. In addition, we filter out all papers co-authored by more than 10 authors to minimize the impact of such outliers.
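The citation-to-hyperarc construction above can be sketched as follows; the data layout (an author map and a citation list) is an assumption for illustration:

```python
# Citation-hypergraph construction sketch: a citation from paper A
# (authors T) to paper B (authors H) yields the hyperarc <H, T>.
# Papers with more than 10 authors are filtered out, as described.
def build_citation_hyperarcs(authors_of, citations, max_authors=10):
    arcs = []
    for citing, cited in citations:
        T, H = authors_of[citing], authors_of[cited]
        if len(T) <= max_authors and len(H) <= max_authors:
            arcs.append((frozenset(H), frozenset(T)))  # (head, tail)
    return arcs

authors_of = {"A": {"v1", "v2", "v3"}, "B": {"v4", "v5"}}
arcs = build_citation_hyperarcs(authors_of, [("A", "B")])
assert arcs == [(frozenset({"v4", "v5"}), frozenset({"v1", "v2", "v3"}))]
```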
1.4 Question answering datasets
We use two question answering hypergraphs, qna-math and qna-server. We create directed hypergraphs from the log data of the question answering site Stack Exchange, provided at Archive (2022). Among various domains, we choose math-overflow, which covers mathematical questions, and server-fault, which covers server-related issues. The original log data contain the posts of the site, and each post involves one questioner and one or more answerers. We ignore all posts without any answerer. We treat each user as a node and each post as a hyperarc. For each hyperarc, the questioner of the corresponding post composes the head set, and the answerer(s) compose the tail set. Note that every hyperarc in these datasets has a unit head set, i.e., \(|H_{i}| = 1, \forall i \in \{1, \ldots , |E|\}\).
1.5 Bitcoin transaction dataset
We use three bitcoin transaction hypergraphs, bitcoin-2014, bitcoin-2015, and bitcoin-2016. The original datasets are provided by Wu et al. (2021), and they contain the first 1,500,000 transactions in 11/2014, 06/2015, and 01/2016, respectively. We model each account as a node and each transaction as a hyperarc. As multiple accounts can be involved in a single transaction, the accounts from which the coins are sent compose the tail set, and the accounts to which the coins are sent compose the head set. We remove all transactions where the head set and the tail set are exactly the same.
Cite this article
Kim, S., Choe, M., Yoo, J. et al. Reciprocity in directed hypergraphs: measures, findings, and generators. Data Min Knowl Disc 37, 2330–2388 (2023). https://doi.org/10.1007/s10618-023-00955-3