Abstract
Statistical relational learning (SRL) and graph neural networks (GNNs) are two powerful approaches for learning and inference over graphs. Typically, they are evaluated in terms of simple metrics such as accuracy over individual node labels. Complex aggregate graph queries (AGQ) involving multiple nodes, edges, and labels are common in the graph mining community and are used to estimate important network properties such as social cohesion and influence. While graph mining algorithms support AGQs, they typically do not take into account uncertainty, or when they do, make simplifying assumptions and do not build full probabilistic models. In this paper, we examine the performance of SRL and GNNs on AGQs over graphs with partially observed node labels. We show that, not surprisingly, inferring the unobserved node labels as a first step and then evaluating the queries on the fully observed graph can lead to suboptimal estimates, and that a better approach is to compute these queries as an expectation under the joint distribution. We propose a sampling framework to tractably compute the expected values of AGQs. Motivated by the analysis of subgroup cohesion in social networks, we propose a suite of AGQs that estimate the community structure in graphs. In our empirical evaluation, we show that by estimating these queries as an expectation, SRL-based approaches yield up to a 50-fold reduction in average error when compared to existing GNN-based approaches.
1 Introduction
Large real-world graphs in domains such as social media (e.g., friendship and follower graphs), computational biology (e.g., protein interaction networks), and IoT (e.g., sensor networks) often have missing information that needs to be inferred. Making use of the graph, or relational structure, can help immensely in accurately inferring missing values (Sen et al., 2008; Neville & Jensen, 2002). Statistical relational learning (SRL) (Getoor & Taskar, 2007; De Raedt et al., 2016) and graph neural networks (GNNs) (Gilmer et al., 2017; Hamilton et al., 2017; Kipf & Welling, 2017; Veličković et al., 2018; Qu et al., 2019) are two powerful machine learning approaches for inferring the missing node labels. These approaches have been shown to be quite effective; however, current literature has largely focused on maximizing locally decomposable metrics such as node label accuracy over individual nodes.
Unfortunately, good performance on these locally decomposable metrics does not necessarily translate to accurate estimation of global graph properties. Properties such as node centrality are important in the analysis of graph phenomena such as influence maximization and resilience to attacks, and involve all the nodes and edges in the graph. Global graph properties can be computed using complex graph queries. While many such graph properties have been proposed (Scott, 1988; Wasserman & Faust, 1994; Cook & Holder, 2006; Rajaraman & Ullman, 2011), along with efficient algorithms to estimate them (Shi et al., 2015; Liu et al., 2018; Wu et al., 2014; Qiang et al., 2014; Dunne & Shneiderman, 2013), the task of estimating these queries when there is missing information, such as node labels, has not received much attention. In such graphs, we must combine query estimation with the inference of missing information such as node labels. These complex queries generally involve many nodes and edges and require joint reasoning over multiple node labels.
In this work, we introduce the notion of aggregate graph queries (AGQs), and argue that researchers should focus more attention on accurately estimating these richer queries. In order to support this, we introduce a suite of useful AGQs that measure subgroup cohesion in graphs (Wasserman & Faust, 1994). We study the effectiveness of SRL and GNN-based approaches in computing AGQs on graphs with missing node labels. For approaches that infer the best possible values for the missing node labels, we propose a point estimate approach, where we first infer the missing values, and then compute the query. For approaches that infer the joint distribution over all the missing node labels, we propose an expectation-based approach that estimates the query as an expectation over the joint distribution. Further, to compute the expectation tractably using Monte Carlo approximation, we propose a novel sampling approach for probabilistic soft logic (PSL), one of the SRL approaches that we study.
We include a theoretical analysis that shows that the point estimate approach leads to suboptimal estimates even for simple graphs with just two nodes. We also provide an extensive empirical analysis showing the extent to which this happens for richer queries on real-world data. Further, we analyze the effect of training data size on the performance of these approaches.
The contributions of our paper include:

We introduce a suite of practical AGQs that measure the key graph property of subgroup cohesion and study the effectiveness of SRL and GNNs in estimating them.

We show that first inferring the missing values and then estimating the AGQs leads to poor performance.

We propose a novel Metropolis-within-Gibbs sampling framework, MIG, for PSL that is faster than existing SRL samplers by a factor of up to three.

Through experiments on three benchmark datasets, we show that computing aggregate properties as an expectation outperforms point estimate approaches by up to a factor of 50.

The runtime experiments show that the proposed MIG approach for PSL is up to 3 times faster than other SRL sampling approaches.
2 Background
In this section, we briefly review several important statistical relational learning and graph neural network based approaches.
2.1 Statistical relational learning
Statistical relational learning (SRL) or statistical relational learning and artificial intelligence (StarAI) methods combine probabilistic reasoning with knowledge representations that capture the structure in the domain (Getoor & Taskar, 2007; De Raedt et al., 2016). SRL frameworks typically define a declarative probabilistic model or theory consisting of weighted first-order logic rules. The rules can encode probabilistic information about the attributes and labels of nodes, and the existence of edges between nodes. Intuitively, the weight of a rule indicates how likely it is that the rule is true in the world. The higher the weight, the higher the probability of the rule being true.
SRL approaches can be broadly classified into proof-theoretic or model-theoretic approaches based on the inference technique used (De Raedt et al., 2020). In proof-theoretic approaches, a sequence of logical reasoning steps or a proof is generated and this is used to define a probability distribution. Probabilistic logic programs (De Raedt & Kimmig, 2015) and stochastic logic programs (Muggleton, 1996) are some popular proof-theoretic approaches. In a model-theoretic approach, the model is used to generate a graphical model or a ground weighted logical theory through a process called grounding. Inference is then performed on the ground model. Probabilistic soft logic (Bach et al., 2017), Markov logic networks (Richardson & Domingos, 2006), and Bayesian logic programs (Kersting & De Raedt, 2007) are some popular model-theoretic approaches.
2.1.1 Markov logic networks
Markov logic networks (MLN) (Richardson & Domingos, 2006; Niu et al., 2011; Venugopal et al., 2016) are a notable model-theoretic SRL framework. An MLN induces an undirected graphical model using the set of logical rules by a process known as grounding. In grounding, the variables in the rules are replaced with values from the data. The atoms in the rules, where the variables are replaced with the values, are called ground atoms and are modeled as Boolean random variables (RVs) in the undirected graph. The ground rules represent cliques in the graph. Based on the data, some RVs are observed (X) and some are unobserved (Y). The probability distribution represented by the graphical model over the unobserved random variables Y is given by:
$$P(Y \mid X) = \frac{1}{Z} \exp \left( \sum _{i=1}^{N} w_{i} f_{i}(X, Y) \right)$$

where \(f_{i}(X, Y)\) is the potential defined using Boolean satisfiability, \(w_{i}\) is the weight, N is the number of ground formulas, and Z is the normalization constant. \(f_{i}(X, Y)\) takes the value 1 if the ground formula is satisfied, and 0 otherwise.
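To make this distribution concrete, the following is a minimal sketch (a toy model of our own, not drawn from the paper) that evaluates the unnormalized weight \(\exp (\sum _{i} w_{i} f_{i})\) over all Boolean worlds of a two-variable model and normalizes it:

```python
import math

# Toy illustration: ground formulas as Boolean potentials f_i over an
# assignment, each with a weight w_i.
def mln_unnormalized(assignment, weighted_formulas):
    """exp(sum_i w_i * f_i(assignment)) for Boolean potentials f_i."""
    return math.exp(sum(w * f(assignment) for w, f in weighted_formulas))

# Example: two RVs a, b with rules "a -> b" (weight 1.5) and "b" (weight 0.5).
formulas = [
    (1.5, lambda y: int((not y["a"]) or y["b"])),  # a -> b
    (0.5, lambda y: int(y["b"])),                  # b
]

worlds = [{"a": a, "b": b} for a in (0, 1) for b in (0, 1)]
Z = sum(mln_unnormalized(y, formulas) for y in worlds)  # normalization constant
probs = {(y["a"], y["b"]): mln_unnormalized(y, formulas) / Z for y in worlds}
```

Worlds that satisfy more (or more heavily weighted) ground formulas receive exponentially more probability mass, which is exactly the behavior the weighted rules are meant to encode.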
2.1.2 Probabilistic soft logic
Probabilistic soft logic (PSL) (Bach et al., 2017) is another recently introduced SRL framework. Similar to MLNs, PSL induces an undirected graphical model using the set of logical rules. Unlike MLNs, the ground atoms in PSL are continuous and defined over the range [0, 1]. For the potential functions, PSL uses a continuous relaxation of Boolean logic, which results in hinge functions instead of Boolean satisfiability. The probability distribution represented by the graphical model over the unobserved random variables Y is given by:
$$P(Y \mid X) = \frac{1}{Z} \exp \left( -\sum _{i=1}^{N} w_{i} \phi _{i}(X, Y) \right)$$

where \(\phi _{i}(X, Y)\) is the potential defined using Lukasiewicz logic, \(w_{i}\) is the weight, N is the number of ground formulas, and Z is the normalization constant. The potential function \(\phi _{i}(X, Y)\) takes the form of a hinge and makes MAP inference in PSL convex.
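As an illustration of the hinge potentials, here is a small sketch (toy truth values and a hypothetical rule, not the learned model) computing a Lukasiewicz distance to satisfaction:

```python
# Illustrative sketch: the Lukasiewicz relaxation turns a logical rule
# "body -> head" into a hinge potential phi = max(0, t(body) - t(head))
# over soft truth values in [0, 1].
def luk_and(a, b):
    """Lukasiewicz conjunction."""
    return max(0.0, a + b - 1.0)

def hinge_potential(t_body, t_head):
    """Distance to satisfaction: 0 exactly when the rule is satisfied."""
    return max(0.0, t_body - t_head)

# Hypothetical rule: LINK(A,B) & HASCAT(A,c) -> HASCAT(B,c)
t_body = luk_and(1.0, 0.8)            # LINK fully true, HASCAT(A,c) = 0.8
phi = hinge_potential(t_body, 0.3)    # HASCAT(B,c) = 0.3, rule partially violated
```

The potential is zero whenever the head is at least as true as the body, and grows linearly with the degree of violation, which is what makes the resulting energy piecewise linear and MAP inference convex.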
2.2 Graph neural networks
GNNs build on top of neural networks to learn a nonlinear representation for each node in a graph. These node representations are learned by encoding information about the local graph structure (Kipf & Welling, 2017; Veličković et al., 2018), edge labels (Schlichtkrull et al., 2018), adjacent node labels (Qu et al., 2019; Pham et al., 2017), and external domain knowledge (Zhang et al., 2020; Qu & Tang, 2019; Harsha Vardhan et al., 2020). GNNs can be broadly classified into non-probabilistic and probabilistic approaches based on whether they explicitly model the joint distribution.
Non-probabilistic approaches learn a nonlinear representation for each node in a graph using a neural network and use them to classify nodes independently. These approaches do not explicitly model the joint probability distribution. Graph convolutional networks (GCNs) (Kipf & Welling, 2017), relational GCNs (Schlichtkrull et al., 2018), and graph attention networks (GATs) (Veličković et al., 2018) are some popular GNN approaches belonging to this category.
Recently, several probabilistic approaches have been proposed that learn a joint distribution over the unobserved node labels in a graph. The distribution is parameterized using a graph neural network. GMNN (Qu et al., 2019), ExpressGNN (Zhang et al., 2020), pGAT (Harsha Vardhan et al., 2020), pLogicNet (Qu & Tang, 2019), and Column Networks (Pham et al., 2017) are some popular probabilistic approaches. To make inference tractable, approaches such as (Qu et al., 2019; Qu & Tang, 2019), and (Zhang et al., 2020) use variational expectation maximization (Neal & Hinton, 1998). In these approaches the joint distribution is approximated with a mean-field variational distribution that is more tractable for inference. Pham et al. (2017) employ an approximate, multistep, iterative method similar to stacked learning, where the intermediate marginal probabilities for a node are used as relational features in the next step.
2.2.1 Graph convolutional networks
Graph convolutional network (GCN) (Kipf & Welling, 2017) is a popular non-probabilistic GNN approach. GCNs iteratively update the representation of each node by combining each node’s representation with its neighbors’ representation. The propagation rule to update the hidden representation of a node is given by:
$$H^{(l+1)} = \sigma \left( \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)} \right)$$

where \(H^{(l)}\) denotes the representation at layer l, \(\tilde{D}\) represents the degree matrix, \(\tilde{A}\) represents the adjacency matrix with self-loops, \(W^{(l)}\) represents the weights, and \(\sigma\) denotes an activation function, such as ReLU. The final representations are fed into a linear softmax layer classifier for label prediction.
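The propagation rule can be sketched in plain Python on a toy three-node path graph; the features and weights below are arbitrary illustrative values:

```python
import math

# Minimal sketch of one GCN propagation step on a tiny 3-node path graph
# (no ML libraries; toy features and weights).
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def relu(M):
    return [[max(0.0, x) for x in row] for row in M]

# Adjacency with self-loops (A~) for the path graph 0-1-2.
A_tilde = [[1, 1, 0],
           [1, 1, 1],
           [0, 1, 1]]
deg = [sum(row) for row in A_tilde]                      # diagonal of D~
D_inv_sqrt = [[(1 / math.sqrt(deg[i])) if i == j else 0.0
               for j in range(3)] for i in range(3)]

H = [[1.0], [0.0], [2.0]]     # one scalar feature per node
W = [[0.5]]                   # layer weights

# H^{l+1} = ReLU(D~^{-1/2} A~ D~^{-1/2} H^l W)
A_hat = matmul(matmul(D_inv_sqrt, A_tilde), D_inv_sqrt)
H_next = relu(matmul(matmul(A_hat, H), W))
```

Each node's new representation is a degree-normalized average of itself and its neighbors, passed through the layer weights and nonlinearity.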
2.2.2 Graph attention networks
Graph attention networks (GATs) (Veličković et al., 2018) are similar to GCNs and use selfattention while combining the representation of each node with its neighbors. This allows the model to assign different weights to each of its neighbors’ representations. The propagation rule for GAT is given by:
$$h_{i}^{(l+1)} = \sigma \left( \sum _{j \in \mathcal {N}_{i}} \alpha _{ij} W h_{j}^{(l)} \right)$$

where \(h_i^{(l)}\) is the representation of node i at layer l, W is the weight matrix, \(\mathcal {N}_{i}\) is the set of neighbors of node i, and \(\alpha _{ij}\) are the attention weights.
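The attention-weighted aggregation for a single node can be sketched as follows; the raw scores and neighbor features are hypothetical placeholders (a full GAT also learns how to produce the scores):

```python
import math

# Sketch of the GAT aggregation for one node: softmax-normalize raw attention
# scores over the neighbors, then mix the neighbors' (transformed) features.
def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.1]        # hypothetical raw scores e_ij for node i
alpha = softmax(scores)         # attention weights alpha_ij sum to 1

h_neighbors = [1.0, 0.0, 2.0]   # scalar stand-ins for W h_j
h_i_next = sum(a * h for a, h in zip(alpha, h_neighbors))
```

Because the weights are normalized per node, a neighbor with a higher score contributes proportionally more to the updated representation.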
2.2.3 Graph Markov neural networks
Graph Markov neural networks (GMNNs) (Qu et al., 2019) are a recently introduced probabilistic approach. GMNNs build on graph neural networks such as GCNs or GATs by adding a second neural network to capture the latent dependencies in the inferred data. The pair of neural networks is trained using a variational EM algorithm. In the E-step, the object representations are learned by the first neural network. In the M-step, the latent dependencies are learned by the other neural network.
3 Problem definition
Consider a graph \(G = (V, \mathcal {E})\), where V is the set of nodes and \(\mathcal {E}\) is the set of edges. Each node \(i \in V\) is associated with a set of attributes denoted by \(\mathbf {a}_{i}\) and a label denoted by \(c_{i} \in \{1,\dots ,K\}\). All nodes and edges of the graph are observed and the node labels are partially observed. The set of observed node labels is denoted by \(C_{o}\), unobserved node labels by \(C_{u}\), and \(C_{o} \cup C_{u} = C\). As an example, consider a computer science citation graph.
Example 1
In a computer science citation graph \(G_c\), the nodes \(V_c\) represent computer science documents and the edges \(E_c\) represent citation links between these documents. The documents in the graph can belong to several categories such as AI, Systems, Compilers, and Databases. The document categories are represented as node labels \(C_c\). The contents of document i, such as the tokens in the abstract, are represented by the node attributes \(a_i\). The documents with observed categories correspond to \(C_{o}\). Documents with categories that need to be inferred correspond to \(C_{u}\).
Definition 1
(Graph queries) A graph query GQ is a Boolean expression over nodes, edges and node labels.
The most common graph queries are those that define a subgraph pattern. A graph query GQ, when evaluated on a graph G with node labels C, returns the set of subgraphs that satisfy the Boolean expression, denoted by GQ(G, C). We refer to graph queries that involve a single node or an edge as simple graph queries, and queries that involve multiple nodes and/or edges as complex graph queries.
Example 2
For the citation graph in Example 1, we might want to infer how dense the citation links are within the categories. The GQ that returns the set of all citation links between documents that belong to the same category is given by:
$$GQ = \{\forall _{(i,j) \in V_c \times V_c} \; (e_{ij} \wedge c_{i}=c_{j})\}$$

The Boolean expression is true when a pair of documents have a citation link between them and also belong to the same category.
Definition 2
(Aggregate graph queries) Aggregate graph queries (AGQs) are a class of graph queries that compute an aggregate function on the set of subgraphs that match the Boolean expression, i.e., an AGQ \(Q(G,C) = Agg(GQ(G, C))\) where Agg is an aggregate function.
For example, Count is an aggregate function that returns the number of subgraphs in the set. AGQs can be considered as a mapping from the graph G and the node labels C to a real number, i.e., \(Q: (G, C) \rightarrow \mathbb {R}\).
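Definitions 1 and 2 can be illustrated with a short sketch; the toy graph, labels, and helper names below are our own, not part of the formal framework:

```python
# Sketch of Definitions 1 and 2: a graph query as a Boolean expression over
# edges and labels, and an AGQ as an aggregate (here Count) over the matches.
edges = [(0, 1), (1, 2), (2, 3)]           # toy graph
labels = {0: "AI", 1: "AI", 2: "DB", 3: "DB"}

def gq_same_category(edges, labels):
    """GQ: the set of edges whose endpoints share a label."""
    return [(i, j) for (i, j) in edges if labels[i] == labels[j]]

def agq_count(edges, labels):
    """AGQ: Q(G, C) = Count(GQ(G, C))."""
    return len(gq_same_category(edges, labels))

result = agq_count(edges, labels)   # edges (0,1) and (2,3) match
```

The graph query returns a set of matching subgraphs, while the AGQ maps that set to a single real number, matching the signature \(Q: (G, C) \rightarrow \mathbb {R}\).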
Example 3
For the citation graph in Example 1, one way to summarize the density of citations within the categories is to count the number of such citations. The aggregate graph query representing the number of citation links between documents that belong to the same category is given by:

$$Q = Count_{(i,j)}(\{\forall _{(i,j) \in V_c \times V_c} \; (e_{ij} \wedge c_{i}=c_{j})\})$$
Definition 3
(Aggregate graph query estimation) Given a graph G, the observed and unobserved node labels \(C_{o}\), \(C_{u}\), and an aggregate graph query Q, the task of aggregate graph query estimation is to compute the value of \(Q(G, C_{o}, C_{u})\).
Example 4
For the citation graph in Example 1, the aggregate graph query in Example 3 cannot be computed directly due to the missing document categories in \(C_{u}\). We need to first infer the category labels before computing the AGQ.
4 Aggregate graph queries
In this section we motivate and introduce several complex AGQs that are useful in analyzing the community structure (also called cohesive subgroups) in graphs. Analyzing the community structure of a graph is necessary to understand the social forces operating in a network and is widely used in social sciences, particularly in social psychology and sociology (Wasserman & Faust, 1994). One of the approaches to quantitatively measure this is the nodal degree approach that computes various statistics regarding the membership of a node and its adjacent nodes to various communities.
We define five different AGQs that can be used to quantitatively measure the community structure of a graph. These queries compute statistics over the entire graph using node and edge frequencies, both within and across categories, as well as relative frequencies between them. We also include an AGQ that measures the accuracy of the predicted node labels to show that AGQs can also capture traditional locally decomposable metrics. The queries are of varying complexity, where complexity is the number of nodes jointly involved in the query. Query Q0 involves a single node, and queries Q1 and Q2 involve two nodes. Queries Q3 to Q5 are more complex and involve all the neighbors of a node. Q1 and Q2 are based on edge frequencies, and Q3 to Q5 are based on node label frequencies. We illustrate these queries using the citation graph introduced in Example 1.
[Q0]: Accuracy: This query measures the number of documents with the correct categories assigned to them. It is a locally decomposable query and is given by:
$$Q0 = Count_{i}(\{\forall _{i \in V} \; (c_{i} = c^*_{i})\})$$

where \(c^*_{i}\) is the ground truth label.
[Q1]: Edge Cohesion: This query measures the number of citation links between documents i, j that belong to the same category. It is given by:

$$Q1 = Count_{(i,j)}(\{\forall _{(i,j) \in V \times V} \; (e_{ij} \wedge c_{i}=c_{j})\})$$

A citation graph with a small number of large, tight-knit categories tends to have a large number of citations between documents of the same category.
[Q2]: Edge Separation: This query measures the number of citation links between documents i, j that belong to different categories. It is given by:

$$Q2 = Count_{(i,j)}(\{\forall _{(i,j) \in V \times V} \; (e_{ij} \wedge c_{i} \ne c_{j})\})$$

A citation graph with a large number of small communities tends to have a large number of citations between documents across different categories.
[Q3]: Diversity of Influence: This query measures the number of documents i in the graph that are connected to at least half of the different document categories. It is given by:
The inner Count computes the number of distinct document categories that a document i is cited by, and the outer Count computes the number of documents that are connected to at least half of the document categories. This query computes the number of k-core nodes in the graph, where k is set to half the number of categories.
[Q4]: Exterior Documents: This query measures the number of documents i that have more than half of their neighbors belonging to categories other than the document's category, i.e.,
The inner counts compute the number of adjacent documents with different labels and the number of adjacent documents, respectively, and the outer count computes the number of such documents in the graph. This AGQ helps measure monophily in the graph as described by Chin et al. (2019).
[Q5]: Interior Documents: This query measures the number of documents i that have more than half of their neighbors belonging to the same category as the document. It is given by:
Similar to the previous query, the inner counts compute the number of adjacent documents with the same label and the number of adjacent documents, respectively, and the outer count computes the number of such documents in the graph.
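A minimal sketch of how queries Q1, Q2, Q4, and Q5 might be computed on a fully labeled toy graph; the adjacency structure, labels, and function names are illustrative, and we assume no missing labels here:

```python
# Hedged sketch of Q1, Q2, Q4, and Q5 on a toy labeled graph.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
labels = {0: "AI", 1: "AI", 2: "AI", 3: "DB"}

def edge_cohesion(adj, labels):       # Q1: same-label edges
    return sum(labels[i] == labels[j]
               for i in adj for j in adj[i] if i < j)

def edge_separation(adj, labels):     # Q2: cross-label edges
    return sum(labels[i] != labels[j]
               for i in adj for j in adj[i] if i < j)

def exterior_documents(adj, labels):  # Q4: > half of neighbors differ in label
    return sum(
        sum(labels[j] != labels[i] for j in adj[i]) > len(adj[i]) / 2
        for i in adj)

def interior_documents(adj, labels):  # Q5: > half of neighbors share the label
    return sum(
        sum(labels[j] == labels[i] for j in adj[i]) > len(adj[i]) / 2
        for i in adj)
```

On this toy graph, three of the four edges connect same-category documents, and only node 3 has a majority of neighbors with a different label.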
5 Estimating aggregate graph queries
In this section, we first introduce the point estimate approach to estimate the AGQs. For models that explicitly learn the joint distribution, we also propose an expectation-based approach.
5.1 Point estimation approach
One approach for aggregate graph query estimation is to impute the locally best possible value for the unobserved node labels \(C_{u}\) and then compute the AGQ. Here, we first learn a model by minimizing a locally decomposable objective function, such as the likelihood of node labels or a loss function defined over the labels, using the graph G, node attributes \(\mathbf {a}_{i}\) and observed node labels \(C_{o}\), and impute values for the unobserved node labels \(C_{u}\) using the learned model. We refer to this approach as a point estimate approach. The point estimate approach is formally defined as follows:
Definition 4
(Point estimate approach) Given an aggregate graph query estimation task, the point estimate approach estimates Q by first imputing the values for \(C_{u}\) (denoted by \(\hat{C}_{u}\)) and then computing the value of Q, i.e., estimate \(\hat{Q} = Q(G, C_{o}, \hat{C}_{u})\).
Non-probabilistic GNN approaches such as GCNs and GATs model the marginal distribution for each unobserved node label and impute labels using the mode of the distribution. SRL approaches such as PSL and MLNs, and probabilistic GNN approaches such as GMNNs, model the joint distribution over all unobserved node labels and impute node labels using the mode or the mean of the joint distribution.
5.2 Expectation-based approach
Another approach for aggregate graph query estimation is to define a joint probability distribution over the unobserved node labels and take the expectation of the aggregate graph query Q over the joint distribution. We refer to this approach as the expectation-based approach. Since the range of the aggregate graph query Q is \(\mathbb {R}\), the expectation is well-defined. The expectation-based approach is formally defined as follows:
Definition 5
(Expectation-based approach) Given an aggregate graph query estimation task, the expectation-based approach estimates Q as an expectation over the joint distribution of the unobserved node labels \(C_{u}\), i.e., estimate \(\hat{Q} = E_{p(\hat{C}_{u} \mid G,C_{o})}[Q(G, C_{o}, \hat{C}_{u})]\).
AGQs can be computed as an expectation using approaches that explicitly model and perform inference on the joint distribution over the unobserved node labels. Non-probabilistic GNNs such as GCNs and GATs do not model the joint distribution and cannot be used to compute the expected value. SRL approaches such as PSL and MLNs and probabilistic GNNs such as GMNNs and ExpressGNNs model the joint distribution explicitly. However, computing the expectation analytically for these approaches is challenging due to the intractability of the integration in the expectation. The expectation can be approximated using Monte Carlo methods by sampling from the distribution.
To make the inference tractable, approaches such as GMNN and ExpressGNN replace the joint distribution with a mean-field variational distribution. The mean-field approximation breaks dependencies between the node labels in the joint distribution. For example, Pham et al. (2017) use an iterative approach to estimate the joint distribution: the final layer of the GNN estimates the marginal node label probabilities using the labels of its neighbors from the previous iteration. Sampling from each node's marginal distribution independently or from a mean-field distribution results in samples with limited dependence between adjacent node labels. This makes computing the expectation of the AGQs using Monte Carlo approximation challenging for these approaches.
6 Analysis of the estimation approaches
In the previous section, we proposed two approaches to estimate the AGQs. In this section, we analyze the two approaches by estimating the value of the AGQ introduced in Example 3 on a graph consisting of two nodes. We use stochastic block models (SBMs) (Holland et al., 1983; Bui et al., 1987; Abbe, 2018) as a generative model for the graph. SBMs are a popular class of generative models used extensively in statistics, physics, and network analysis. SBMs take as input the number of nodes n, a K-dimensional vector (\(\gamma\)), where \(\gamma _{k} > 0\) and \(\sum _{k=1}^{K} \gamma _{k} = 1\), representing the fraction of nodes that belong to category k, and a \(K \times K\) symmetric matrix (\(\Pi\)) whose elements \(\Pi _{k_{1}k_{2}}\) represent the probability of an edge between two nodes belonging to categories \(k_{1},k_{2}\). We assume that at least one of the \(\Pi _{k_{1}k_{2}}\) where \(k_{1} \ne k_{2}\) is nonzero, i.e., there is a nonzero probability of observing an edge between nodes belonging to different categories.
The SBM generative process for a graph \(G = (V, \mathcal {E})\) with node labels C is: for each node \(i \in V\), draw a label \(c_{i} \sim Categorical(\gamma )\); then, for each pair of nodes \(i, j \in V\), draw an edge \(e_{ij} \sim Bernoulli(\Pi _{c_{i}c_{j}})\).
Consider a graph G with two nodes i, j connected by an edge \(e_{ij}\). The joint distribution for the node labels \(c_{i}, c_{j}\), under the SBM, is given by:

$$p(c_{i} = k_{1}, c_{j} = k_{2} \mid e_{ij}) \propto \gamma _{k_{1}}\gamma _{k_{2}}\Pi _{k_{1}k_{2}}$$
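The generative process can be sketched as a short sampler; the `sample_sbm` helper and the parameter values are illustrative assumptions, not part of the analysis:

```python
import random

# Sketch of the SBM generative process: sample node labels from gamma, then
# sample each edge independently with probability Pi[c_i][c_j].
def sample_sbm(n, gamma, Pi, seed=0):
    rng = random.Random(seed)
    K = len(gamma)
    # draw each label c_i ~ Categorical(gamma)
    labels = [rng.choices(range(K), weights=gamma)[0] for _ in range(n)]
    # draw each edge e_ij ~ Bernoulli(Pi[c_i][c_j])
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < Pi[labels[i]][labels[j]]]
    return labels, edges

gamma = [0.5, 0.5]
Pi = [[0.9, 0.1],
      [0.1, 0.9]]                      # assortative communities
labels, edges = sample_sbm(20, gamma, Pi)
```

With an assortative \(\Pi\) like this one, most sampled edges fall within a community, which is the regime where the cohesion queries of Sect. 4 are informative.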
We now show that even for the simple aggregate graph query introduced earlier that counts the number of adjacent nodes belonging to the same category, the point estimate approach leads to large errors.
Theorem 1
For a graph G generated using an SBM with two nodes i, j and an edge between them, the point estimate approach cannot minimize the expected MSE for the AGQ \(Q = Count_{(i,j)}(\{\forall _{(i,j) \in V \times V} (e_{ij} \wedge c_{i}=c_{j})\})\).
Proof
The expected MSE for Q is given by \(E[(Q - \hat{Q})^2]\). We know that the expected MSE is minimized when \(\hat{Q} = E[Q]\), i.e.,

$$\hat{Q}^{*} = \arg \min _{\hat{Q}} E[(Q - \hat{Q})^2] = E[Q]$$
Since the query Q takes the value 1 when both nodes i, j have the same label and 0 otherwise, the expected value for the query Q, E[Q], is equal to the probability of i, j having the same node label. Thus E[Q] is given by:

$$E[Q] = \sum _{k \in C} \gamma _{k}^{2}\Pi _{kk}$$
Since \(\sum _{k_{1} \in C} \sum _{k_{2} \in C} \gamma _{k_{1}}\gamma _{k_{2}}\Pi _{k_{1}k_{2}} = 1\) and at least one of the terms \(\gamma _{k_{1}}\gamma _{k_{2}}\Pi _{k_{1}k_{2}} \ne 0\) when \(k_{1} \ne k_{2}\), \(\sum _{k \in C} \gamma _{k}^2\Pi _{kk}\) lies strictly between 0 and 1. Thus \(0< E[Q] < 1\).
The point estimate approach imputes labels for the nodes i, j and estimates \(\hat{Q}\) to be 1 if the imputed values for i, j belong to the same category and 0 otherwise. Since the point estimate approach estimates \(\hat{Q}\) to be either 0 or 1, no point estimate approach can minimize the expected MSE. \(\square\)
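The theorem can be checked numerically. The sketch below uses arbitrary illustrative SBM parameters and compares the expected MSE of the expectation-based estimate \(E[Q]\) with the best constant point estimate in \(\{0, 1\}\):

```python
# Numeric check of Theorem 1 on a two-node SBM graph (illustrative numbers).
gamma = [0.6, 0.4]
Pi = [[0.8, 0.3],
      [0.3, 0.5]]

# Joint over (c_i, c_j), conditioned on the edge being present.
unnorm = {(a, b): gamma[a] * gamma[b] * Pi[a][b] for a in (0, 1) for b in (0, 1)}
Z = sum(unnorm.values())
p_same = sum(v for (a, b), v in unnorm.items() if a == b) / Z  # = E[Q]

def mse(q_hat):
    # Q is 1 with probability p_same and 0 otherwise; q_hat is a constant.
    return p_same * (1 - q_hat) ** 2 + (1 - p_same) * q_hat ** 2

mse_expectation = mse(p_same)                 # expectation-based estimate
mse_point = min(mse(0.0), mse(1.0))           # best possible point estimate
```

Because \(0< E[Q] < 1\), the expectation-based estimate strictly beats any 0/1 point estimate in expected MSE, as the theorem asserts.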
The above theorem shows that even for simple queries, the point estimate approach leads to suboptimal estimation. We show in the empirical evaluation that this also holds for more complex queries on larger graphs. Further, from Eq. 6, we know that an optimal estimate can be obtained using an expectation-based approach which directly computes the expectation of AGQs under the joint distribution.
7 Expectation-based approach for PSL
In the previous section, we showed that point estimate approaches do not obtain optimal estimates. Better estimates of AGQs can be obtained by computing the expectation of AGQs over the joint distribution. Computing the expectation analytically for SRL approaches may not always be possible due to the intractability of the integration in the expectation. One way to overcome this problem is to use Monte Carlo methods to approximate the expectation by sampling from the distribution. The expectation can be approximated as follows:
$$E_{p(C_{u} \mid G, C_{o})}[Q] \approx \frac{1}{S}\sum _{j=1}^{S} Q(G, C_{o}, C_{u(j)})$$

where S is the number of samples and \(C_{u(j)}\) are samples drawn from the distribution \(p(C_{u} \mid G, C_{o})\).
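The Monte Carlo approximation can be sketched as follows; the stand-in joint sampler and toy query are our own illustrations, not the PSL model:

```python
import random

# Sketch of the Monte Carlo approximation: average an AGQ over S joint samples
# of the unobserved labels, drawn here from a stand-in sampler.
def monte_carlo_agq(query, sample_labels, S=1000, seed=0):
    rng = random.Random(seed)
    return sum(query(sample_labels(rng)) for _ in range(S)) / S

# Stand-in joint sampler over two correlated binary labels.
def sample_labels(rng):
    c0 = rng.random() < 0.5
    c1 = c0 if rng.random() < 0.9 else not c0   # labels agree 90% of the time
    return (c0, c1)

same_label_count = lambda c: float(c[0] == c[1])
estimate = monte_carlo_agq(same_label_count, sample_labels)
```

Because the samples preserve the dependence between the two labels, the estimate converges to the true expectation (0.9 here) rather than to the value implied by independent marginals.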
Gibbs sampling (Gilks et al., 1995) is a type of MCMC sampling approach that generates samples from the joint distribution by iteratively sampling from the conditional distribution of each RV. For MLNs, where conditional distributions follow a binomial distribution, approaches such as MC-SAT have been proposed (Poon & Domingos, 2006) that combine MCMC and satisfiability.
In PSL, the unobserved node labels \(C_u\) are modeled as unobserved RVs \(Y_{0:m}\), where m is the number of nodes with unobserved labels. The conditional distribution for a RV \(y_{i} \in Y\), conditioned on all other variables \(X, Y_{\setminus i}\), is given by:

$$p(y_{i} \mid X, Y_{\setminus i}) \propto \exp \left( -\sum _{j=1}^{N_{i}} w_{j}\phi _{j}(X, Y) \right)$$

where \(N_{i}\) is the number of groundings in which variable \(y_{i}\) participates. The above distribution neither corresponds to a standard named distribution nor has a form amenable to techniques such as inversion sampling. Hence, it is nontrivial to generate samples from the conditional distributions of PSL.
To address this challenge, unlike a previous hit-and-run based sampling approach (Broecheler & Getoor, 2010), we propose a simple but effective approach for sampling from the joint distribution. We overcome the challenge of sampling from the conditional by incorporating a single step of a Metropolis algorithm within the Gibbs sampler [also called Metropolis-within-Gibbs (Gilks et al., 1995)]. The algorithm for our proposed approach (MIG sampler) is given in Algorithm 1. For each RV \(y_{i}\), we first sample a new value \(y_{i}'\) from a uniform distribution Unif(0, 1) and compute the acceptance ratio \(\alpha\) given by:

$$\alpha = \frac{p(y_{i}' \mid X, Y_{\setminus i})}{p(y_{i} \mid X, Y_{\setminus i})}$$
We then accept the new value \(y_{i}'\) as a sample from the conditional with probability \(\min (1, \alpha )\). We discard the first b samples as burn-in. Further, for faster convergence, we start the sampling from the MAP state of PSL.
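A compact sketch of the sampler's inner loop on a toy two-variable model with hinge potentials; this is our illustration of Metropolis-within-Gibbs under assumed potentials, not the exact Algorithm 1:

```python
import math
import random

# Hedged sketch of a Metropolis-within-Gibbs sweep: for each RV y_i, propose
# y_i' ~ Unif(0, 1) and accept with the Metropolis ratio of the unnormalized
# conditional exp(-sum_j w_j * phi_j). Potentials here are toy hinges.
def mig_sweep(y, potentials, rng):
    """One Gibbs sweep; potentials[i] lists (w_j, phi_j) pairs touching y[i]."""
    for i in range(len(y)):
        def neg_energy(v):
            trial = list(y)
            trial[i] = v
            return -sum(w * phi(trial) for w, phi in potentials[i])

        proposal = rng.random()                       # y_i' ~ Unif(0, 1)
        alpha = math.exp(neg_energy(proposal) - neg_energy(y[i]))
        if rng.random() < min(1.0, alpha):            # Metropolis accept step
            y[i] = proposal
    return y

# Toy model: two soft RVs pulled toward agreement by hinge potentials.
potentials = {
    0: [(2.0, lambda t: max(0.0, t[0] - t[1]))],
    1: [(2.0, lambda t: max(0.0, t[1] - t[0]))],
}
rng = random.Random(0)
y = [0.9, 0.1]
samples = [list(mig_sweep(y, potentials, rng)) for _ in range(200)]
```

Since the uniform proposal is symmetric, the acceptance ratio reduces to the ratio of the unnormalized conditionals, so the intractable normalization constant never needs to be computed.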
8 Empirical evaluation
In this section we analyze the performance of SRL and GNN-based approaches on AGQs. We answer the following research questions:

RQ1: How does the performance of expectation-based approaches compare with point estimate approaches?

RQ2: How does the performance vary with the amount of labeled data?

RQ3: What is the tradeoff in performance between estimating aggregate graph queries and locally decomposable evaluation metrics such as accuracy?

RQ4: What is the runtime performance of these approaches?
8.1 Experimental setup and datasets
We consider three benchmark citation datasets for node classification: Cora, Pubmed and Citeseer (Sen et al., 2008). The nodes correspond to documents, the edges correspond to citations, the attributes correspond to words in the document, and the categories correspond to areas of research. The statistics for these datasets are given in Table 1. We assume all the attributes \(a_{i}\) and citations \(\mathcal {E}\) are observed, while the categories C are only partially observed. We generate five folds consisting of 500 nodes for training, 100 nodes for validation (600 observed node labels), and use the remaining as test nodes. All approaches are given access to observed node labels during training and metrics are evaluated on the test data.
SRL approaches: For both MLNs and PSL, we extend the model defined in Bach et al. (2017) to incorporate node attributes. We use a bag-of-words representation for the node attributes. We train a logistic regression model (LR) to predict the node labels using the bag-of-words vectors. For each node, we consider the category with the highest probability as the LR prediction. Since LR does not need early stopping, we use all the observed node labels to train the model. We set the L2 regularizer weight to 0.001.
The model contains the following rules:
The predicate \(\textbf {HASCAT}(\texttt {A}, \texttt {Cat})\) is true if document \(\texttt {A}\) belongs to category \(\texttt {Cat}\), and the predicate \(\textbf {LINK}(\texttt {A}, \texttt {B})\) is true if documents A and B have a citation link between them. The model incorporates the logistic regression predictions using the predicate \(\textbf {LR}(\texttt {A}, \texttt {Cat})\), which is true if LR predicts category Cat for document A. For MLNs, we include a functional constraint that prevents a document from having multiple categories set to true. For PSL, we include a highly weighted rule that states that the truth values across all categories must sum to 1 for a node. We learn the rule weights using MC-SAT for MLNs and maximum likelihood estimation for PSL, using the training and validation data.
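As a hedged sketch, the rules described above might be written in PSL-style syntax as follows; the weights w1 and w2 are placeholders, and this rule set is our reading of the description, not the paper's verbatim model:

```
// Prior from the logistic regression predictions.
w1 : LR(A, Cat) -> HASCAT(A, Cat)
// Collective rule: linked documents tend to share a category.
w2 : LINK(A, B) & HASCAT(A, Cat) -> HASCAT(B, Cat)
// Highly weighted constraint: truth values across categories sum to 1.
HASCAT(A, +Cat) = 1 .
```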
The different SRL-based approaches that we consider are:

LR: We compute the AGQs using the predictions of the logistic regression model trained on the node attributes. This is a point-estimate approach.

MLN-MAP: This is a point-estimate approach that computes the mode of the joint distribution defined by the MLN model. We use the MaxWalkSAT algorithm implemented in the Tuffy framework (Niu et al., 2011).

MLN-SAM: This is an expectation-based approach that estimates the AGQs as an expectation over the distribution defined by the MLN model. We generate 1000 samples using the MC-SAT algorithm, discard the first 500 as burn-in, randomly choose 100 of the remaining 500 samples (to reduce correlation), and use a Monte Carlo approximation to compute the AGQs.

PSL-MAP: This is a point-estimate approach that computes the mode of the distribution defined by the PSL model. We use the ADMM algorithm implemented in the PSL framework (Bach et al., 2017).

PSL-SAM: This is an expectation-based approach that estimates the AGQs as an expectation over the distribution defined by the PSL model. As with MLN-SAM, we generate 1000 samples using the proposed MIG sampler introduced in Algorithm 1, discard the first 500 as burn-in, randomly choose 100 of the remaining 500 samples (to reduce correlation), and use a Monte Carlo approximation to compute the AGQs.
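The expectation-based protocol shared by MLN-SAM and PSL-SAM (burn-in, thinning, Monte Carlo averaging) can be sketched as follows. The `expected_agq` helper and the toy graph are illustrative assumptions, and the samples here are random stand-ins for actual MC-SAT or MIG draws.

```python
import numpy as np

def expected_agq(samples, query_fn, burn_in=500, keep=100, seed=0):
    """Monte Carlo estimate of an AGQ under the joint distribution.

    `samples` is a list of label assignments drawn by a sampler such as
    MC-SAT (MLN) or MIG (PSL); `query_fn` maps one assignment to the
    query value. Following the protocol above, the first `burn_in`
    samples are discarded and `keep` of the remainder are subsampled at
    random to reduce autocorrelation. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    post = samples[burn_in:]
    idx = rng.choice(len(post), size=keep, replace=False)
    return float(np.mean([query_fn(post[i]) for i in idx]))

# Toy example: estimate E[# same-category adjacent pairs] (a Q1-style
# query) on a 4-node path graph with binary labels.
edges = [(0, 1), (1, 2), (2, 3)]
toy_samples = [np.random.default_rng(s).integers(0, 2, size=4)
               for s in range(1000)]
q1 = lambda labels: sum(int(labels[u] == labels[v]) for u, v in edges)
est = expected_agq(toy_samples, q1)
```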
GNN-based approaches: These are point-estimate approaches that use learned node representations to infer node labels. The models are trained using the training and validation data, where the validation data is used for early stopping. We use the code provided by the authors of the respective papers. For all three approaches, we performed hyperparameter tuning and found that the hyperparameters provided by the authors performed best. The different GNN-based approaches we consider are:

GCN: This approach uses the representation computed using a graph convolutional network (Kipf & Welling, 2017).

GAT: This approach uses the representation computed using a graph attention network (Veličković et al., 2018).

GMNN: This approach uses the representation computed using the recently introduced graph Markov neural network (Qu et al., 2019).
Metrics: In Sects. 8.2 and 8.3, we evaluate performance on the AGQs (Q0 to Q5) using the relative query error (QE), and in Sect. 8.4 we evaluate categorical accuracy (Acc) and homophily error. QE is computed as \(QE = \frac{|\hat{Q} - Q|}{Q}\), where Q is the true value of the query and \(\hat{Q}\) is the predicted value. We evaluate the overall performance of each method by computing the average QE over all queries, denoted AQE. For the homophily error, we use the homophily measure H defined in Dandekar et al. (2012) and, analogously to QE, compute the absolute difference with respect to the true H computed from the true labels. The homophily measure is given by \(H = \frac{|S|}{|NS|} = \frac{Q1}{Q2}\), where S and NS are the sets of edges whose endpoint nodes have the same category and different categories, respectively. All reported metrics are averaged across the five folds.
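A minimal sketch of the two metrics, assuming the absolute-value form of the relative error; `query_error` and `homophily` are hypothetical helper names.

```python
def query_error(q_hat, q_true):
    """Relative query error QE = |Q_hat - Q| / Q (assumed absolute form)."""
    return abs(q_hat - q_true) / q_true

def homophily(labels, edges):
    """H = |S| / |NS|: same-category edges over different-category edges.

    Assumes at least one cross-category edge, so the ratio is defined.
    """
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    diff = len(edges) - same
    return same / diff

labels = [0, 0, 1, 1]
edges = [(0, 1), (1, 2), (2, 3)]  # two same-category edges, one cross edge
h = homophily(labels, edges)      # -> 2/1 = 2.0
qe = query_error(h, 2.0)          # zero error against the true H
```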
8.2 Performance on AGQs
In this section, we answer RQ1 by computing the QE for the AGQs proposed in Sect. 4. The QE and AQE for all datasets are shown in Table 2. We observe that PSL-SAM has the lowest or second-lowest error across most of the non-decomposable queries (Q1–Q5). GNNs perform better on accuracy (Q0), which is a locally decomposable query. On Citeseer, although LR performs worse on locally decomposable AGQs such as Q0, it performs better on the other AGQs. This is due to the sparse nature of the graph, on which non-collective approaches perform better. Among collective approaches, PSL-SAM outperforms all others. GNNs have high query errors for non-decomposable AGQs, which is consistent with our theoretical analysis.
Among the queries, we observe that Q1 and Q5 have lower error than the other queries for all methods. Both Q1 and Q5 estimate the number of node pairs that are adjacent and have the same category. These are easier to estimate, as such nodes typically lie at the center of category clusters; since all the approaches propagate similarity between neighboring nodes, the models have lower error on these queries. Queries Q2, Q3, and Q4 involve nodes that have neighbors with different categories. These nodes lie on the boundary of category clusters and are harder to infer. GNN-based approaches have very large errors for these queries, resulting in overall poor performance.
8.3 Effect of training data
To address RQ2, we create five variants of the datasets by varying the amount of training data available to each method from 200 to 600 in increments of 100. Figure 1 shows the performance of the different methods on AGQs as we increase the number of training examples. We report the mean and standard deviation of AQE across the five folds. On all three datasets, expectation-based approaches have the lowest error. The average query error of logistic regression decreases sharply on Citeseer as the amount of training data grows. We also observe that expectation-based approaches are more robust to the amount of training data than point-estimate approaches.
8.4 Tradeoff between estimating AGQs and locally decomposable metrics
To answer RQ3, we compute the accuracy of the predicted node labels, a locally decomposable metric. Accuracy involves correctly estimating each node's label individually; estimating AGQs, on the other hand, requires correctly estimating the node labels of several adjacent nodes.
In Fig. 2, we plot the accuracy of the predicted node labels for all three datasets with different amounts of training data. We observe that GNNs have higher accuracy than SRL approaches. This is due to the sparsity of node attributes in these datasets, which leads to inferior predictions by the logistic regression classifier. GNNs overcome this sparsity by aggregating the features of neighboring nodes. However, this implicitly assumes that a node's neighbors have the same label. While this holds for most nodes, it is not always true. As a result, GNNs tend to perform poorly on AGQs, which involve correctly estimating multiple node labels that may belong to different categories.
The error in homophily between the estimated node labels and the true labels is shown in Fig. 2. We observe that GNN-based approaches have a large error compared to SRL approaches. Further, we observed that by artificially modifying the weight of the first PSL rule, which propagates node labels across citation edges, accuracy could be improved at the cost of poorer AGQ estimates. This shows that there is a trade-off between locally decomposable metrics such as accuracy and AGQs. While GNN-based approaches are good at estimating locally decomposable metrics, they perform poorly when estimating AGQs. SRL-based approaches, due to their modeling flexibility, can be tuned to perform well on either type of metric.
8.5 Runtime comparisons
To answer RQ4, we recorded the runtimes of the different approaches; they are given in Table 3. As expected, point-estimate approaches are significantly faster than expectation-based approaches. This is not surprising, as point estimates are computed using efficient optimization procedures. Among GCN, GAT, GMNN, PSL-MAP, and MLN-MAP, we observe that GMNN takes the least time on the Pubmed and Citeseer datasets, and PSL-MAP takes the least time on Cora. Between PSL-SAM and MLN-SAM, we observe that our proposed MIG sampler for PSL is faster than MLN-SAM by a factor of two on Cora and a factor of three on Pubmed.
9 Conclusion and future work
In this paper, we motivate the practical need for aggregate graph queries (AGQs) and show that existing approaches which optimize for locally decomposable metrics such as accuracy perform well neither theoretically nor empirically. To compute the expectation under the joint distribution, we introduce a novel sampling approach for PSL, MIG, that is both effective and efficient. We perform an extensive evaluation of SRL and GNN approaches for answering AGQs. Our experiments show that SRL methods achieve up to 50 times lower error than GNNs and that our proposed MIG sampler is up to three times faster than other SRL sampling approaches. An interesting future direction is to combine GNN approaches with SRL models that can learn node representations while also inferring a joint distribution over the unobserved data. Extending this analysis to networks with missing edges and nodes is another interesting line of future work.
Notes
The code for our approach can be found at https://github.com/linqs/embarmlj21.
References
Abbe, E. (2018). Community detection and stochastic block models: Recent developments. Journal of Machine Learning Research, 18, 1–86.
Bach, S. H., Broecheler, M., Huang, B., & Getoor, L. (2017). Hingeloss Markov random fields and probabilistic soft logic. Journal of Machine Learning Research, 18, 1–67.
Broecheler, M., & Getoor, L. (2010). Computing marginal distributions over continuous Markov networks for statistical relational learning. In NeurIPS.
Bui, T. N., Chaudhuri, S., Leighton, F. T., & Sipser, M. (1987). Graph bisection algorithms with good average case behavior. Combinatorica, 7, 171–191.
Chin, A., Chen, Y., Altenburger, K. M., & Ugander, J. (2019). Decoupled smoothing on graphs. In WWW.
Cook, D. J., & Holder, L. B. (2006). Mining graph data. Wiley.
Dandekar, P., Goel, A., & Lee, D. (2012). Biased assimilation, homophily, and the dynamics of polarization. In WINE.
De Raedt, L., Dumančić, S., Manhaeve, R., & Marra, G. (2020). From statistical relational to neuro-symbolic artificial intelligence. In IJCAI.
De Raedt, L., Kersting, K., & Natarajan, S. (2016). Statistical relational artificial intelligence: Logic, probability, and computation. Morgan & Claypool Publishers.
De Raedt, L., & Kimmig, A. (2015). Probabilistic (logic) programming concepts. Machine Learning, 100, 5–47.
Dunne, C., & Shneiderman, B. (2013). Motif simplification: Improving network visualization readability with fan, connector, and clique glyphs. In CHI.
Getoor, L., & Taskar, B. (2007). Introduction to Statistical Relational Learning. The MIT Press.
Gilks, W. R., Richardson, S., & Spiegelhalter, D. (1995). Markov chain Monte Carlo in practice. Chapman and Hall/CRC.
Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., & Dahl, G. E. (2017). Neural message passing for quantum chemistry. In ICML.
Hamilton, W., Ying, Z., & Leskovec, J. (2017). Inductive representation learning on large graphs. In NeurIPS.
Harsha Vardhan, L., Jia, G., & Kok, S. (2020). Probabilistic logic graph attention networks for reasoning. In WWW companion.
Holland, P. W., Laskey, K. B., & Leinhardt, S. (1983). Stochastic blockmodels: First steps. Social Networks, 5, 109–137.
Kersting, K., & De Raedt, L. (2007). Bayesian logic programming: Theory and tool. In L. Getoor & B. Taskar (Eds.), An introduction to Statistical Relational Learning. MIT Press.
Kipf, T. N., & Welling, M. (2017). Semi-supervised classification with graph convolutional networks. In ICLR.
Liu, Y., Safavi, T., Dighe, A., & Koutra, D. (2018). Graph summarization methods and applications: A survey. ACM Computing Surveys (CSUR), 51, 62–96.
Muggleton, S., et al. (1996). Stochastic logic programs. Advances in Inductive Logic Programming, 32, 254–264.
Neal, R. M., & Hinton, G. E. (1998). A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan (Ed.), Learning in graphical models (pp. 355–368). Springer.
Neville, J., & Jensen, D. (2002). Iterative classification in relational data. In AAAI workshop on learning statistical models from relational data.
Niu, F., Ré, C., Doan, A. H., & Shavlik, J. (2011). Tuffy: Scaling up statistical inference in Markov logic networks using an RDBMS. International Journal on Very Large Data Bases, 4, 373–384.
Pham, T., Tran, T., Phung, D., & Venkatesh, S. (2017). Column networks for collective classification. In AAAI.
Poon, H., & Domingos, P. (2006). Sound and efficient inference with probabilistic and deterministic dependencies. In AAAI.
Qu, M., Bengio, Y., & Tang, J. (2019). GMNN: Graph Markov neural networks. In ICML.
Qu, M., & Tang, J. (2019). Probabilistic logic neural networks for reasoning. In NeurIPS.
Qiang, Q., Liu, S., Jensen, C. S., Zhu, F., & Faloutsos, C. (2014). Interestingness-driven diffusion process summarization in dynamic networks. In ECML.
Rajaraman, A., & Ullman, J. D. (2011). Mining of massive datasets. Cambridge University Press.
Richardson, M., & Domingos, P. (2006). Markov logic networks. Machine Learning, 62, 107–136.
Schlichtkrull, M., Kipf, T. N., Bloem, P., Van Den Berg, R., Titov, I., & Welling, M. (2018). Modeling relational data with graph convolutional networks. In ESWC.
Scott, J. (1988). Social network analysis. Sociology, 22, 109–127.
Sen, P., Namata, G., Bilgic, M., Getoor, L., Galligher, B., & Eliassi-Rad, T. (2008). Collective classification in network data. AI Magazine, 29, 93–106.
Shi, L., Tong, H., Tang, J., & Lin, C. (2015). Vegas: Visual influence graph summarization on citation networks. IEEE Transactions on Knowledge and Data Engineering, 27, 3417–3431.
Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., & Bengio, Y. (2018). Graph attention networks. In ICLR.
Venugopal, D., Sarkhel, S., & Gogate, V. (2016). Magician: Scalable inference and learning in Markov logic using approximate symmetries. Technical report, UofM, Memphis.
Wasserman, S., & Faust, K. (1994). Social Network Analysis: Methods and Applications. Cambridge University Press.
Wu, Y., Zhong, Z., Xiong, W., & Jing, N. (2014). Graph summarization for attributed graphs. In ISEEE.
Zhang, Y., Chen, X., Yang, Y., Ramamurthy, A., Li, B., Qi, Y., & Song, L. (2020). Efficient probabilistic logic reasoning with graph neural networks.
Acknowledgements
This work was partially supported by National Science Foundation grants CCF-1740850, CCF-2023495, and IIS-1703331, AFRL, IFDS DMS-2023495, and the Defense Advanced Research Projects Agency.
Editors: Nikos Katzouris, Alexander Artikis, Luc De Raedt, Artur d’Avila Garcez, Sebastijan Dumančić, Ute Schmid, Jay Pujara.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Embar, V., Srinivasan, S. & Getoor, L. A comparison of statistical relational learning and graph neural networks for aggregate graph queries. Mach Learn 110, 1847–1866 (2021). https://doi.org/10.1007/s10994021060075