1 Introduction

Networks are used in many scientific disciplines to represent interactions among objects of interest. For example, in the social sciences, a network typically represents social ties between actors. In the biological sciences, a network can represent interactions between proteins.

The stochastic block model (SBM) (Holland et al. 1983; Snijders and Nowicki 1997) is a popular generative model which partitions vertices into latent classes. Conditional on the latent class allocations, the connection probability between two vertices depends only on the latent classes to which the two vertices belong. Many extensions of the classical SBM have been proposed, including the degree-corrected SBM (Karrer and Newman 2011; Peng and Carvalho 2016), the mixed membership SBM (Airoldi et al. 2008) and the overlapping SBM (Latouche et al. 2011).

The SBM and many of its variants are usually restricted to Bernoulli networks. However, many binary networks are produced by applying a threshold to a weighted relationship (Ghasemian et al. 2016), which results in the loss of potentially valuable information. Although most of the literature has focused on binary networks, there is a growing interest in weighted graphs (Barrat et al. 2004; Newman 2004; Peixoto 2018).

In particular, a number of clustering methods have been proposed for weighted graphs, including algorithm-based and model-based methods. Algorithm-based methods for clustering weighted graphs can be further divided into two classes: algorithms which do not explicitly optimize any criterion (Pons and Latapy 2005; von Luxburg 2007) and those which directly optimize a criterion (Clauset et al. 2004; Stouffer and Bascompte 2011). Model-based methods (Mariadassou et al. 2010; Aicher et al. 2013, 2015; Ludkin 2020) attempt to take into account the random variability in the data. A recent review of graph clustering methods is given by Leger et al. (2014).

Mariadassou et al. (2010) presents a Poisson mixture random graph model for integer-valued networks and proposes a variational inference approach for parameter estimation. The model can account for covariates via a regression model. In Zanghi et al. (2010), a mixture modelling framework is considered for random graphs with discrete or continuous edges. In particular, the edge distribution is assumed to follow an exponential family distribution. Aicher et al. (2013) proposed a general class of weighted stochastic block models for dense graphs where edge weights are assumed to be generated according to an exponential family distribution. In particular, their construction produces complete graphs, in which every pair of vertices is connected by some real-valued weight. Since most real-world networks are sparse, the constructed model cannot be applied directly. To address this shortcoming, Aicher et al. (2015) extends the work of Aicher et al. (2013) and models the edge existence using a Bernoulli distribution and the edge weights using an exponential family distribution. The contributions of the edge-existence distribution and the edge-weight distribution in the likelihood function are then combined via a simple tuning parameter. However, their construction does not result in a generative model and it is not obvious how to simulate network observations from the proposed model. More recently, Ludkin (2020) presents a generalization of the SBM which allows arbitrary edge weight distributions and proposes a reversible jump Markov chain Monte Carlo sampler for estimating the parameters and the number of blocks. However, the use of continuous probability distributions to model the edge weights implies that the resulting graph is complete, whereby every edge is present. This assumption is unrealistic for many applications in which a certain proportion of the real-valued edges is 0. Haj et al. (2020) presents a binomial SBM for weighted graphs and proposes a variational expectation maximization algorithm for parameter estimation.

In this paper, we propose a weighted stochastic block model (WSBM) with gamma weights which aims to capture the information in the weights directly through a generative model. Both maximum likelihood estimation and variational methods are considered for parameter estimation, and consistency results are derived. We also address the problem of choosing the number of classes using the integrated completed likelihood (ICL) criterion (Biernacki et al. 2000). The proposed models and inference methodology are applied to an illustrative data set.

2 Model specification

In this section, we present the weighted stochastic block model in detail and introduce the main notations and assumptions.

We let \(\varOmega = ({\mathcal {V}}, {\mathcal {X}}, {\mathcal {Y}})\) denote the set of directed weighted random graphs, where \({\mathcal {V}} = \mathbb {N}\) is the countable set of vertices, \( {{\mathcal {X}}} = \{ 0, 1 \}^{\mathbb {N} \times \mathbb {N}}\) is the set of edge-existence adjacency matrices, and \( {\mathcal {Y}} = \mathbb {R}_{+}^{ \mathbb {N} \times \mathbb {N} } \) is the set of weighted adjacency matrices. Given a random adjacency matrix \(X = \{ X_{ij} \}_{i,j \in \mathbb {N}}\), \(X_{ij}=1\) if an edge exists from vertex i to vertex j and \(X_{ij}=0\) otherwise. The associated weighted random adjacency matrix is given by: for \(i \ne j\), \(Y_{ij} > 0\) if \(X_{ij}=1\), and \(Y_{ij}=0\) otherwise. Let \(\mathbb {P}\) be a probability measure on \(\varOmega \).

2.1 Generative model

We now describe the procedure for generating a sample random graph \((V, X, Y)\) with n vertices from \( \varOmega \).

  • Let \(Z_{[n]}=(Z_{1}, \ldots , Z_{n})\) be the vector of latent block allocations for the vertices, and set \(\theta =(\theta _{1}, \ldots , \theta _{Q})\) with \(\sum _{q} \theta _{q}=1\). For each vertex \(v_{i}\), draw its block label \(Z_{i} \in \{1, \ldots , Q\}\) from a multinomial distribution

    $$\begin{aligned} Z_{i} \sim {{\mathcal {M}}}(1; \theta _{1}, \ldots , \theta _{Q}) \text{. } \end{aligned}$$
  • Let \(\pi = (\pi _{ql})_{q,l=1}^{Q}\) be a \(Q \times Q\) matrix with entries in [0, 1]. Conditional on the block allocations \(Z_{[n]}\), the entries \(X_{ij}\), \(i \ne j\), of the edge-existence adjacency matrix \(X_{[n]}\) are generated independently from a Bernoulli distribution

    $$\begin{aligned} X_{ij} | Z_{i}=q, Z_{j}= l \sim {{\mathcal {B}}}(\pi _{ql}) \text{. } \end{aligned}$$
  • Let \(\alpha = (\alpha _{ql})_{q,l=1}^{Q} \) and \( \beta =(\beta _{ql})_{q,l=1}^{Q}\) be \(Q \times Q\) matrices with entries taking values in the positive reals. Conditional on the latent block allocations \(Z_{[n]}\) and the edge-existence adjacency matrix X, the entries of the weighted adjacency matrix \(Y_{[n]}\) are generated independently from

    $$\begin{aligned} Y_{ij} \mid X_{ij} = 1, Z_{i} = q, Z_{j} = l&\sim \text{ Ga }(\alpha _{ql}, \beta _{ql}) \text{, } \\ Y_{ij} \mid X_{ij} = 0, Z_{i} = q, Z_{j} = l&\sim \delta _{\{0\}} \text{, } \end{aligned}$$

    where \(\text{ Ga }(\cdot ,\cdot )\) denotes the gamma distribution and \( \delta _{\{0\}}\) denotes the point mass at 0.

The generative framework described above is a straightforward extension of the binary stochastic block model, whereby a positive weight is generated according to a gamma distribution for each existing edge. In particular, \((X, Z)\) is a realization of the binary directed SBM. The gamma distribution is chosen for its flexibility in the sense that, depending on the value of its shape parameter, it can represent distributions of different shapes.
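
The three generative steps above translate directly into a short simulation routine. The following is a minimal sketch in Python; the function name and array layout are ours rather than the paper's, and the example reuses the two-class parameter setting of Sect. 7.1.

```python
import numpy as np

def simulate_wsbm(n, theta, pi, alpha, beta, seed=None):
    """Draw (Z, X, Y) from the directed gamma-weighted SBM of Sect. 2.1."""
    rng = np.random.default_rng(seed)
    Q = len(theta)
    # Step 1: latent block labels Z_i ~ M(1; theta_1, ..., theta_Q).
    Z = rng.choice(Q, size=n, p=theta)
    # Step 2: edge indicators X_ij ~ Bernoulli(pi[Z_i, Z_j]), no self-loops.
    X = (rng.random((n, n)) < pi[Z][:, Z]).astype(int)
    np.fill_diagonal(X, 0)
    # Step 3: gamma weights on existing edges, Y_ij = 0 otherwise.
    # numpy parameterizes the gamma by shape and scale, so scale = 1 / rate.
    Y = X * rng.gamma(shape=alpha[Z][:, Z], scale=1.0 / beta[Z][:, Z])
    return Z, X, Y

# Example: the two-class setting of (8) and (9) in Sect. 7.1.
theta = np.array([0.7, 0.3])
pi = np.array([[0.8, 0.2], [0.3, 0.9]])
alpha = np.array([[10.0, 0.3], [3.0, 0.5]])
beta = np.array([[2.0, 1.0], [0.2, 1.0]])
Z, X, Y = simulate_wsbm(200, theta, pi, alpha, beta, seed=0)
```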

The log-likelihood of the observations \(X_{[n]}\) and \(Y_{[n]}\) is given by

$$\begin{aligned} {{\mathcal {L}}}_{2}(Y_{[n]}, X_{[n]}; \theta , \pi , \alpha , \beta ) = \log \Big ( \sum _{z_{[n]}} e^{ {{\mathcal {L}}}_{1}(Y_{[n]}, X_{[n]}; z_{[n]}, \pi , \alpha , \beta ) } \mathbb {P}\{ Z_{[n]} = z_{[n]} \} \Big ) , \end{aligned}$$
(1)

where the sum is over all possible latent block allocations, \(\mathbb {P} \{ Z_{[n]} = z_{[n]} \} = \prod _{i=1}^{n} \theta _{z_{i}} \) is the probability of latent block allocation \(z_{[n]}\), and

$$\begin{aligned}&{{\mathcal {L}}}_{1}(Y_{[n]}, X_{[n]}; z_{[n]}, \pi , \alpha , \beta ) \nonumber \\&\quad = \sum _{i \ne j} \Big [ X_{ij} \big \{ \log \pi _{z_{i}, z_{j}} + \log f(Y_{ij}; \alpha _{z_{i},z_{j}}, \beta _{z_{i},z_{j}} ) \big \} + (1-X_{ij}) \log (1-\pi _{z_{i},z_{j}}) \Big ] \nonumber \\&\quad = \sum _{i \ne j} \Big [ X_{ij} \log \pi _{z_{i}, z_{j}} + (1-X_{ij}) \log (1-\pi _{z_{i},z_{j}}) \nonumber \\&\qquad + X_{ij} \big \{ \alpha _{z_{i},z_{j}} \log \beta _{z_{i},z_{j}} + (\alpha _{z_{i},z_{j}} -1) \log Y_{ij} - \beta _{z_{i},z_{j}} Y_{ij} - \log \varGamma (\alpha _{z_{i},z_{j}}) \big \} \Big ] \end{aligned}$$
(2)

is the complete data log-likelihood, where \( f(\cdot ;a,b)\) is the gamma probability density function with shape parameter a and rate parameter b.
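
For concreteness, (2) can be evaluated directly from the simulated arrays. The sketch below is a plain transcription of (2) with our own naming; it is illustrative only and is not the estimation code used later.

```python
import numpy as np
from scipy.special import gammaln

def complete_loglik(X, Y, z, pi, alpha, beta):
    """Evaluate the complete-data log-likelihood L1 of Eq. (2)."""
    n = X.shape[0]
    p, a, b = pi[z][:, z], alpha[z][:, z], beta[z][:, z]
    off = ~np.eye(n, dtype=bool)               # the sum runs over i != j
    X, Y, p, a, b = X[off], Y[off], p[off], a[off], b[off]
    # Bernoulli part: X log(pi) + (1 - X) log(1 - pi).
    bern = X * np.log(p) + (1 - X) * np.log1p(-p)
    # Gamma part, counted only on existing edges (X_ij = 1).
    logY = np.where(X == 1, np.log(np.where(Y > 0, Y, 1.0)), 0.0)
    gamm = X * (a * np.log(b) + (a - 1) * logY - b * Y - gammaln(a))
    return bern.sum() + gamm.sum()
```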

2.2 Assumptions

We present several assumptions needed for identifiability and consistency of maximum likelihood estimates. The following four assumptions were presented in Celisse et al. (2012) and are needed in this paper.

Assumption 1

For every \(q \ne q^{'}\), there exists \(l \in \{1, \ldots , Q\} \) such that

$$\begin{aligned} \pi _{q,l} \ne \pi _{q^{'},l} \text{ or } \pi _{l,q} \ne \pi _{l,q^{'}} . \end{aligned}$$

Assumption 2

There exists \( \zeta \in (0,1)\) such that for all \( (q,l) \in \{1, \ldots , Q \}^{2} \)

$$\begin{aligned} \pi _{ql} \in [\zeta , 1-\zeta ] . \end{aligned}$$

Assumption 3

There exists \(0< \gamma < 1/Q \) such that for all \( q \in \{1, \ldots , Q\}\),

$$\begin{aligned} \theta _{q} \in [\gamma , 1-\gamma ] . \end{aligned}$$

Assumption 4

There exist \(0< \gamma < 1/Q \) and \( n_{0} \in \mathbb {N}^{*} \) such that for all \( q \in \{1, \ldots , Q\} \) and all \( n \ge n_{0} \),

$$\begin{aligned} \frac{ N_{q}(z^{*}_{[n]} ) }{n } \ge \gamma , \end{aligned}$$

where \( N_{q}(z^{*}_{[n]} ) = | \{ 1 \le i \le n : z^{*}_{i} = q \} | \) and \(z^{*}_{[n]}\) is any realized block allocation under the WSBM.

(A1) requires that no two classes have the same connectivity probabilities. If this assumption were violated, the resulting model would have too many classes and would be non-identifiable. (A2) requires that the connectivity probability between any two classes lies strictly within a closed subset of the unit interval. Note that this assumption is slightly more restrictive than assumption 2 of Celisse et al. (2012) in that we do not consider the boundary cases where \(\pi _{q,l} \in \{0,1\}\). The boundary cases require special treatment and are not pursued in this paper. (A3) ensures that no class is empty with high probability, while (A4) is the empirical version of (A3). We note that (A4) is satisfied asymptotically under the generative framework in Sect. 2.1 since the block allocations are generated according to a multinomial distribution.

In addition to the four assumptions above, we also have the following constraints on the gamma parameters.

Assumption 5

For every \(q \ne q^{'}\), there exists \( l \in \{1, \ldots , Q \}\) such that

$$\begin{aligned} ( \alpha _{q,l}, \beta _{q,l}) \ne (\alpha _{q^{'},l}, \beta _{q^{'},l}) \text{ or } (\alpha _{l,q}, \beta _{l, q}) \ne (\alpha _{l,q^{'}},\beta _{l,q^{'}}) . \end{aligned}$$

(A5) requires that no two classes have the same weight distribution. This assumption is the exact counterpart of (A1).

The log-likelihood function (1) contains degeneracies that prevent the direct estimation of parameters \(\theta , \pi , \alpha , \beta \). To see this, we note that the probability density function of a gamma distribution \(\text{ Ga }(a,b)\) is given by

$$\begin{aligned} f(y;a,b) = \frac{ y^{a-1} \exp (-b y) b^{a} }{ \varGamma (a) } \text{. } \end{aligned}$$

By Stirling’s formula, we have

$$\begin{aligned} \varGamma (a) = \sqrt{2 \pi } a^{a-1/2} \exp (-a) (1 + {{\mathcal {O}}}(a^{-1}) ) \text{. } \end{aligned}$$

Setting \(b = a/y\), so that \(a/b = y\),

$$\begin{aligned} f(y;a,b) = \frac{ y^{a-1} \exp (-a) (a/y)^{a} }{ \varGamma (a) } = \frac{1}{ \sqrt{2 \pi } y} \frac{ \sqrt{a} }{ 1 + {{\mathcal {O}}}(a^{-1}) } \text{. } \end{aligned}$$

Therefore, letting \(a \rightarrow \infty \) while keeping \(a/b = y\), we have \(f(y;a,b) \rightarrow \infty \). One can therefore show that the log-likelihood function is unbounded above. To avoid likelihood degeneracy, we compactify the parameter space; that is, we restrict the parameter space to a compact subset which contains the true parameters. Therefore, we have the following assumption.

Assumption 6

There exist \( 0< \alpha _{c}< \alpha _{C} < \infty \) and \( 0< \beta _{c}< \beta _{C} < \infty \) such that for all \( (q,l) \in \{ 1, \ldots , Q \}^{2}\),

$$\begin{aligned} \alpha _{c} \le \alpha _{ql} \le \alpha _{C} \text{, } \beta _{c} \le \beta _{ql} \le \beta _{C} \text{. } \end{aligned}$$

With this assumption, it is easy to see that the log-likelihood function is bounded for any sample size.
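
The following small numerical check illustrates the degeneracy that (A6) rules out. It fixes the density's argument at y, sets the rate to \(b = a/y\), and lets the shape a grow; scipy's shape/scale parameterization is used, with scale \(= 1/b = y/a\).

```python
from scipy.stats import gamma

y = 2.0
for a in [1e1, 1e3, 1e5, 1e7]:
    dens = gamma.pdf(y, a, scale=y / a)   # Ga(shape a, rate a / y) at y
    print(f"a = {a:9.0e}   f(y; a, a/y) = {dens:10.3f}")
# The printed values grow like sqrt(a / (2 * pi)) / y, in line with the
# Stirling computation above, so the likelihood is unbounded without (A6).
```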

2.3 Identifiability

Sufficient conditions for identifiability of the binary SBM with two classes were first obtained by Allman et al. (2009). Celisse et al. (2012) show that the SBM parameters are identifiable up to a permutation of class labels under the conditions that \( \pi \theta \) has distinct coordinates and \(n \ge 2Q\). The condition on \(\pi \theta \) is mild since the set of vectors violating this assumption has Lebesgue measure 0. The identifiability of the weighted SBM is more challenging: the only known result (Section 4 of Allman et al. (2011)) requires all entries of \((\pi , \alpha , \beta )\) to be distinct. We note that the assumptions in the previous section are necessary, but not necessarily sufficient, to ensure the identifiability of the parameters.

3 Asymptotic recovery of class labels

We study the posterior probability distribution of the class labels \(Z_{[n]}\) given the random adjacency matrix \(X_{[n]}\) and weight matrix \(Y_{[n]}\), which is denoted by \(\mathbb {P}(Z_{[n]}|X_{[n]},Y_{[n]})\). Since \(X_{[n]}\) and \(Y_{[n]}\) are random, \(\mathbb {P}(Z_{[n]}|X_{[n]},Y_{[n]})\) is also random.

Let \( P^{*}(X_{[n]},Y_{[n]}) := \mathbb {P}(X_{[n]},Y_{[n]}|Z_{[n]}=z_{[n]}^{*}) \) be the true conditional distribution of \((X_{[n]},Y_{[n]})\), which depends on the true parameters \((\theta ^{*},\pi ^{*},\alpha ^{*},\beta ^{*})\). We study the rate at which the posterior probability of the true block allocation converges to 1 with respect to \(P^{*}\).

The model is invariant when the rows and columns of the matrices \(\pi , \alpha , \beta \) are permuted simultaneously according to some permutation \(\sigma : \{1,\ldots ,Q\} \rightarrow \{1,\ldots ,Q\}\) of the class labels. Let \(\pi ^{\sigma }, \alpha ^{\sigma }, \beta ^{\sigma }\) be the matrices defined by

$$\begin{aligned} \pi ^{\sigma }_{ql} = \pi _{\sigma (q),\sigma (l)} \text{, } \alpha ^{\sigma }_{ql} =\alpha _{\sigma (q), \sigma (l)} \text{, } \beta ^{\sigma }_{ql} = \beta _{\sigma (q), \sigma (l)} \end{aligned}$$

and define the set

$$\begin{aligned} \varSigma = \{ \sigma : \{1,\ldots ,Q\} \rightarrow \{1,\ldots ,Q\} | \pi ^{\sigma } = \pi , \alpha ^{\sigma } = \alpha , \beta ^{\sigma }=\beta \} \text{. } \end{aligned}$$

Two vectors of class labels z and \(z^{'}\) are equivalent if there exists \(\sigma \in \varSigma \) such that \( z^{'}_{i} = \sigma (z_{i}) \) for all i. We let [z] denote the equivalence class of z and will omit the square brackets in the equivalence-class notation as long as no confusion arises.
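
In practice, an estimated allocation is therefore compared with a reference allocation only after aligning labels over \(\varSigma \). One standard device for doing so (our illustration, not a procedure from the paper) is maximum-overlap matching via the Hungarian algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_labels(z_hat, z_ref, Q):
    """Relabel z_hat by the permutation maximizing agreement with z_ref."""
    # confusion[q, l] = number of vertices with z_hat = q and z_ref = l.
    confusion = np.zeros((Q, Q), dtype=int)
    np.add.at(confusion, (z_hat, z_ref), 1)
    rows, cols = linear_sum_assignment(-confusion)   # maximize total overlap
    sigma = dict(zip(rows, cols))
    return np.array([sigma[q] for q in z_hat])
```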

The following result extends Theorem 3.1 of Celisse et al. (2012) to the case of WSBM.

Theorem 1

Under assumptions (A1)–(A6), for every \(t>0\),

$$\begin{aligned} P^{*} \Bigg [ \sum _{ [z_{[n]}] \ne [z^{*}_{[n]}] } \frac{ \mathbb {P}( [ Z_{[n]} ] = [ z_{[n]} ] | X_{[n]}, Y_{[n]} )}{ \mathbb {P}( [ Z_{[n]} ] = [ z^{*}_{[n]} ] | X_{[n]}, Y_{[n]} ) } > t \Bigg ] = {{\mathcal {O}}}(n e^{- \kappa n}) \end{aligned}$$

uniformly with respect to \(z^{*}\), and for some \( \kappa > 0\) depending only on \(\pi ^{*}, \alpha ^{*}, \beta ^{*}\) but not on \(z^{*}\). Here \(z^{*} = (z_i^{*})_{i=1}^{\infty }\) with \(z_i^{*} \in \{1, \ldots , Q\}\). Furthermore, \(P^{*}\) can be replaced by \(\mathbb {P}\) under assumptions (A1)–(A3) and (A5)–(A6).

4 Maximum likelihood estimation of WSBM parameters

For the binary SBM, consistency of parameter estimation has been shown for profile likelihood maximization (Bickel and Chen 2009), the spectral clustering method (Rohe et al. 2011), the method of moments (Bickel et al. 2011), a method based on empirical degrees (Channarond et al. 2012), and others (Choi et al. 2012). Consistency of both maximum likelihood estimation and the variational approximation method is established in Celisse et al. (2012) and Bickel et al. (2013), with asymptotic normality also established in Bickel et al. (2013). Abbe (2018) reviews recent developments in stochastic block models and community detection.

Ambroise and Matias (2012) proposes a general class of sparse and weighted SBMs where the edge distribution may take any parametric form, and studies the consistency and convergence rates of the various estimators considered in their paper. However, their model requires the edge-existence parameter either to be constant across the graph, that is, \(\pi _{ql} = \pi \) for all q, l, or to take the form \(\pi _{ql} = a_1 I_{q =l} + a_2 I_{q \ne l}\), where \(I_{\cdot }\) is the indicator function and \(a_1 \ne a_2\). Furthermore, they assume that, conditional on the block assignments \(Z_{i}=q, Z_{j}=l\), the edge weight \(Y_{ij}\) follows a parametric distribution with a single parameter \(\theta _{ql}\). They further impose the restriction that

$$\begin{aligned} \theta _{ql} = {\left\{ \begin{array}{ll} \theta _{in} \quad \text{ if } q = l, \\ \theta _{out} \quad \text{ if } q \ne l. \end{array}\right. } \end{aligned}$$

These assumptions are more restrictive than those imposed in this paper. Jog and Loh (2015) studies the problem of characterizing the boundary between success and failure of the MLE when edge weights are drawn from discrete distributions. More recently, Brault et al. (2020) studies the consistency and asymptotic normality of the MLE and variational estimators for the latent block model, which is a generalization of the SBM. However, the model considered in Brault et al. (2020) is restricted to the dense setting and requires the observations in the data matrix to be modelled by univariate exponential family distributions.

This section addresses the consistency of the MLE for the WSBM. In particular, we extend the results obtained in the pioneering paper of Celisse et al. (2012) to the case of weighted graphs, and our proof closely follows the MLE consistency proof therein. The consistency proofs for \( ( \pi , \alpha , \beta ) \) and for \( \theta \) require different treatments since there are \(n(n-1)\) edges but only n vertices. The following result establishes the MLE consistency of \( (\pi , \alpha , \beta )\).

Theorem 2

Assume that assumptions (A1), (A2), (A3), (A5), (A6) hold. Let us define the MLE of \( ( \theta ^{*}, \pi ^{*}, \alpha ^{*}, \beta ^{*})\) by

$$\begin{aligned} (\hat{ \theta }, \hat{ \pi }, \hat{ \alpha }, \hat{\beta }) := \arg \max _{\theta , \pi , \alpha , \beta } {{\mathcal {L}}}_{2}(Y_{[n]}, X_{[n]}; \theta , \pi , \alpha , \beta ) . \end{aligned}$$

Then for any metric \( d(\cdot ,\cdot ) \) on \( (\pi , \alpha , \beta ) \),

$$\begin{aligned} d( ( \hat{\pi }, \hat{\alpha }, \hat{\beta }), ( \pi ^{*}, \alpha ^{*}, \beta ^{*}) ) \xrightarrow [ n \rightarrow \infty ]{ \mathbb {P} } 0 . \end{aligned}$$

Under an additional assumption on the rate of convergence of the estimators \( (\hat{\pi }, \hat{\alpha }, \hat{\beta }) \), consistency of \( \hat{\theta } \) can be established.

Theorem 3

Let \( (\hat{\theta }, \hat{\pi }, \hat{\alpha }, \hat{\beta } ) \) denote the MLE of \( (\theta ^{*}, \pi ^{*}, \alpha ^{*}, \beta ^{*}) \) and assume that \( || \hat{\pi } - \pi ^{*} ||_{\infty } = o_{\mathbb {P}}( \sqrt{\log n}/n ) \), \( || \hat{\alpha } - \alpha ^{*} ||_{\infty } = o_{\mathbb {P}}( \sqrt{\log n}/n ) \), and \( || \hat{\beta } - \beta ^{*} ||_{\infty } = o_{\mathbb {P}}( \sqrt{\log n}/n ) \), then

$$\begin{aligned} d( \hat{\theta }, \theta ^{*}) \xrightarrow [n \rightarrow \infty ]{\mathbb {P}} 0 \end{aligned}$$

for any metric d in \(\mathbb {R}^{Q}\).

5 Variational estimators

Direct maximization of the log-likelihood function is intractable except for very small graphs since it involves a sum over \(Q^{n}\) terms. In practice, approximate algorithms such as Markov Chain Monte Carlo (MCMC) and variational inference algorithms are often used for parameter inference. For the SBM, both MCMC and variational inference approaches have been proposed (Snijders and Nowicki 1997; Daudin et al. 2008). Variational inference algorithms have also been developed for mixed membership SBM (Airoldi et al. 2008), overlapping SBM (Latouche et al. 2011), and the weighted SBM proposed in Aicher et al. (2013). This section develops a variational inference algorithm for the WSBM which can be considered a natural extension of the algorithm proposed in Daudin et al. (2008) for the SBM.

The variational method consists in approximating \(P(Z_{[n]}= \cdot | X_{[n]}, Y_{[n]})\) by a product of n multinomial distributions. Let \( {{\mathcal {D}}}_{n}\) denote a set of product multinomial distributions

$$\begin{aligned} {{\mathcal {D}}}_{n} = \Big \{ D_{\tau _{[n]}} = \prod _{i=1}^{n} {{\mathcal {M}}}(1; \tau _{i,1}, \ldots , \tau _{i,Q}) | \tau _{[n]} \in {{\mathcal {S}}}_{n} \Big \} \end{aligned}$$

where

$$\begin{aligned} {{\mathcal {S}}}_{n} = \Big \{ \tau _{[n]} = (\tau _{1}, \ldots , \tau _{n}) \in ([0,1]^{Q})^{n} | \text{ for } \text{ all } i, \tau _{i} = (\tau _{i,1}, \ldots , \tau _{i,Q}), \sum _{q=1}^{Q} \tau _{i,q} = 1 \Big \} \text{. } \end{aligned}$$

For any \(D_{\tau _{[n]}} \in {{\mathcal {D}}}_{n} \), the variational log-likelihood is defined by

$$\begin{aligned} {{\mathcal {J}}}(Y_{[n]},X_{[n]}; \tau _{[n]}, \theta , \pi , \alpha , \beta ) = {{\mathcal {L}}}_{2}(Y_{[n]}, X_{[n]}; \theta , \pi , \alpha , \beta ) - \mathbb {KL}(D_{\tau _{[n]}}, P(\cdot |X_{[n]}, Y_{[n]})) \text{. } \end{aligned}$$

Here \(\mathbb {KL}(\cdot ,\cdot )\) denotes the Kullback-Leibler divergence between two probability distributions, which is nonnegative. Therefore, \({{\mathcal {J}}}\) provides a lower bound on the log-likelihood function. We have that

$$\begin{aligned} {{\mathcal {J}}}(Y_{[n]},X_{[n]}; \tau _{[n]}, \theta , \pi , \alpha , \beta ) = \sum _{i \ne j} \sum _{q,l} \tau _{i,q} \tau _{jl} \Big ( \log \text{ b }(X_{ij}; \pi _{ql}) + X_{ij} \log f(Y_{ij}; \alpha _{ql}, \beta _{ql}) \Big ) \\ - \sum _{i} \sum _{q} \tau _{iq} ( \log \tau _{iq} - \log \theta _{q} ) \end{aligned}$$

where \(\text{ b }(\cdot ;p)\) denotes the probability mass function of a Bernoulli distribution with parameter p, and recall that \( f(\cdot ;\alpha _{ql},\beta _{ql}) \) denotes the density function of a gamma distribution \(\text{ Ga }(\alpha _{ql},\beta _{ql})\).
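
The bound \({{\mathcal {J}}}\) is straightforward to evaluate numerically, which is useful for monitoring convergence of the algorithm described next. The sketch below is a direct transcription of the display above, with \(\tau _{[n]}\) stored as an \(n \times Q\) matrix; all names are ours.

```python
import numpy as np
from scipy.special import gammaln

def lower_bound(X, Y, tau, theta, pi, alpha, beta):
    """Evaluate the variational lower bound J for given tau and parameters."""
    n, Q = tau.shape
    off = ~np.eye(n, dtype=bool)
    logY = np.where(X == 1, np.log(np.where(Y > 0, Y, 1.0)), 0.0)
    J = 0.0
    for q in range(Q):
        for l in range(Q):
            # log b(X_ij; pi_ql) + X_ij log f(Y_ij; alpha_ql, beta_ql)
            term = (X * np.log(pi[q, l]) + (1 - X) * np.log1p(-pi[q, l])
                    + X * (alpha[q, l] * np.log(beta[q, l])
                           + (alpha[q, l] - 1) * logY
                           - beta[q, l] * Y - gammaln(alpha[q, l])))
            J += (np.outer(tau[:, q], tau[:, l]) * term)[off].sum()
    # Entropy term: - sum_i sum_q tau_iq (log tau_iq - log theta_q).
    J -= np.sum(tau * (np.log(np.clip(tau, 1e-12, None)) - np.log(theta)))
    return J
```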

The variational algorithm works by iteratively maximizing the lower bound \({{\mathcal {J}}}\) with respect to the approximating distribution \( D_{\tau _{[n]}} \), and estimating the model parameters. Maximization of \({{\mathcal {J}}}\) with respect to \( D_{\tau _{[n]}} \) consists of solving

$$\begin{aligned} \hat{\tau }_{[n]} := \text{ argmax}_{\tau _{[n]}} {\mathcal {J}}(Y_{[n]},X_{[n]}; \tau _{[n]}, \theta , \pi , \alpha , \beta ) \text{, } \end{aligned}$$

where \(\theta , \pi , \alpha , \beta \) can be replaced by plug-in estimates. This has a closed-form solution given by

$$\begin{aligned} \hat{\tau }_{iq} \propto \theta _{q} \prod _{j \ne i} \prod _{l} \text{ b }(X_{ij};\pi _{ql})^{\hat{\tau }_{jl}} f(Y_{ij}; \alpha _{ql}, \beta _{ql})^{ \hat{\tau }_{jl} X_{ij} } \text{. } \end{aligned}$$
(3)

Conditional on \(\hat{\tau }_{[n]}\), the variational estimators of \( (\theta , \pi , \alpha , \beta )\) are found by solving

$$\begin{aligned} (\tilde{\theta }, \tilde{\pi }, \tilde{\alpha }, \tilde{\beta }) = \text{ argmax}_{\theta , \pi , \alpha , \beta } {\mathcal {J}}(Y_{[n]},X_{[n]}; \tau _{[n]}, \theta , \pi , \alpha , \beta ) . \end{aligned}$$

Closed-form updates for \(\tilde{\theta }\) and \(\tilde{\pi }\) exist and are given by

$$\begin{aligned} \tilde{\theta }_{q}&= \frac{1}{n} \sum _{i} \hat{ \tau }_{iq} \end{aligned}$$
(4)
$$\begin{aligned} \tilde{\pi }_{ql}&= \frac{ \sum _{i \ne j} \hat{\tau }_{iq} \hat{\tau }_{jl} X_{ij} }{ \sum _{i \ne j} \hat{\tau }_{iq} \hat{\tau }_{jl} } \text{. } \end{aligned}$$
(5)

On the other hand, the updates for \(\tilde{\alpha }\) and \(\tilde{\beta }\) do not have closed forms since the maximum likelihood estimators of the two parameters of a gamma distribution are not available in closed form. However, using the fact that the gamma distribution is a special case of the generalized gamma distribution, Ye and Chen (2017) derived simple closed-form estimators for the two parameters of the gamma distribution; these estimators were shown to be strongly consistent and asymptotically normal. For \( q,l = 1, \ldots ,Q\), let us define the quantities

$$\begin{aligned} \tilde{W}_{ql}&= \sum _{i \ne j, X_{ij} = 1} \hat{\tau }_{iq} \hat{\tau }_{jl}\\ \tilde{U}_{ql}&= \sum _{i \ne j, X_{ij} = 1} \hat{\tau }_{iq}\hat{\tau }_{jl} Y_{ij}\\ \tilde{V}_{ql}&= \sum _{i \ne j, X_{ij} = 1} \hat{\tau }_{iq}\hat{\tau }_{jl} \log Y_{ij}\\ \tilde{S}_{ql}&= \sum _{i \ne j, X_{ij} = 1} \hat{\tau }_{iq}\hat{\tau }_{jl} Y_{ij} \log Y_{ij} , \end{aligned}$$

then the updates for \(\alpha _{ql}, \beta _{ql}\) are given by

$$\begin{aligned} \tilde{\alpha }_{ql}&= \frac{ \tilde{W}_{ql} \tilde{U}_{ql} }{ \tilde{W}_{ql} \tilde{S}_{ql} - \tilde{V}_{ql} \tilde{U}_{ql} } \end{aligned}$$
(6)
$$\begin{aligned} \tilde{\beta }_{ql}&= \frac{ \tilde{W}_{ql}^{2} }{ \tilde{W}_{ql} \tilde{S}_{ql} - \tilde{V}_{ql} \tilde{U}_{ql} } \text{. } \end{aligned}$$
(7)

We obtain the variational estimators \((\tilde{\theta }, \tilde{\pi }, \tilde{\alpha }, \tilde{\beta })\) by iterating (3), (4), (5), (6), (7) until convergence.
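
A compact sketch of one full pass of updates (3)–(7) is given below. It favours transparency over efficiency (the array of products \(\tau _{iq} \tau _{jl}\) costs \(Q^{2} n^{2}\) memory), omits initialization, convergence checks and numerical safeguards, and all names are ours.

```python
import numpy as np
from scipy.special import gammaln

def vem_step(X, Y, tau, theta, pi, alpha, beta):
    """One pass of the variational updates (3)-(7)."""
    n, Q = tau.shape
    logY = np.where(X == 1, np.log(np.where(Y > 0, Y, 1.0)), 0.0)
    # Update (3): a fixed-point step for tau, computed in the log domain.
    log_tau = np.tile(np.log(theta), (n, 1))
    for q in range(Q):
        for l in range(Q):
            contrib = (X * np.log(pi[q, l]) + (1 - X) * np.log1p(-pi[q, l])
                       + X * (alpha[q, l] * np.log(beta[q, l])
                              + (alpha[q, l] - 1) * logY
                              - beta[q, l] * Y - gammaln(alpha[q, l])))
            np.fill_diagonal(contrib, 0.0)     # the product runs over j != i
            log_tau[:, q] += contrib @ tau[:, l]
    tau = np.exp(log_tau - log_tau.max(axis=1, keepdims=True))
    tau /= tau.sum(axis=1, keepdims=True)
    # Updates (4) and (5): closed forms for theta and pi.
    theta = tau.mean(axis=0)
    off = ~np.eye(n, dtype=bool)
    TT = np.einsum('iq,jl->qlij', tau, tau)    # TT[q, l, i, j] = tau_iq tau_jl
    pi = (TT * X)[:, :, off].sum(axis=-1) / TT[:, :, off].sum(axis=-1)
    # Updates (6) and (7): closed-form gamma estimators of Ye and Chen (2017).
    E = (X == 1) & off                         # existing off-diagonal edges
    W = (TT * E).sum(axis=(2, 3))
    U = (TT * (E * Y)).sum(axis=(2, 3))
    V = (TT * (E * logY)).sum(axis=(2, 3))
    S = (TT * (E * Y * logY)).sum(axis=(2, 3))
    denom = W * S - V * U
    return tau, theta, pi, W * U / denom, W ** 2 / denom
```

In practice the step is repeated, for instance from several random initializations of tau, until the lower bound \({{\mathcal {J}}}\) stops increasing.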

We now address the consistency of the variational estimators derived above. The following two propositions are the counterparts of Theorems 2 and 3 for the variational estimators. We omit the proofs since they follow arguments similar to the proofs of Corollary 4.3 and Theorem 4.4 of Celisse et al. (2012).

Proposition 1

Assume that assumptions (A1), (A2), (A3), (A5) and (A6) hold, and let \( (\tilde{\theta }, \tilde{\pi }, \tilde{\alpha }, \tilde{\beta } ) \) be the variational estimators defined above. Then for any distance \(d(\cdot ,\cdot )\) on \( ( \pi , \alpha , \beta ) \),

$$\begin{aligned} d( (\tilde{\pi }, \tilde{\alpha }, \tilde{\beta } ), ( \pi ^{*}, \alpha ^{*}, \beta ^{*} ) ) \xrightarrow [ n \rightarrow \infty ]{ \mathbb {P} } 0 . \end{aligned}$$

Proposition 2

Assume that the variational estimators \( (\tilde{\pi }, \tilde{\alpha }, \tilde{\beta }) \) converge at rate 1/n to \( (\pi ^{*}, \alpha ^{*}, \beta ^{*}) \), respectively, and that assumptions (A1), (A2), (A3), (A5), (A6) hold. We have

$$\begin{aligned} d(\tilde{\theta }, \theta ^{*}) \xrightarrow [n \rightarrow \infty ]{\mathbb {P}} 0 , \end{aligned}$$

where d denotes any distance between vectors in \(\mathbb {R}^{Q}\).

Note that a stronger assumption on the convergence rate of \( \tilde{\pi }, \tilde{\alpha }, \tilde{\beta } \), namely 1/n, is required for Proposition 2, compared with \(\sqrt{\log n}/n\) in Theorem 3. The same assumption is used in Theorem 4.4 of Celisse et al. (2012).

6 Choosing the number of classes

In real-world applications, the number of classes is typically unknown and needs to be estimated from the data. For the SBM, a number of methods have been developed to determine the number of classes, including methods based on the log-likelihood ratio statistic (Wang and Bickel 2017), composite likelihood (Saldaña et al. 2017), the exact integrated complete data likelihood (Côme and Latouche 2015) and the Bayesian framework (Yan 2016). Model selection for variants of the SBM has also been investigated (Latouche et al. 2014).

We apply the integrated classification likelihood (ICL) criterion developed by Biernacki et al. (2000) to choose the number of classes for the WSBM. The ICL is an approximation of the complete-data integrated likelihood. The ICL criterion for the SBM was derived by Daudin et al. (2008) under the assumptions that the prior distribution of \( (\theta , \pi )\) factorizes and that \(\theta \) has a non-informative Dirichlet prior. Here we follow the approach of Daudin et al. (2008) to derive an approximate ICL for the WSBM.

Let \(m_{Q}\) denote the model with Q blocks. The ICL criterion is an approximation of the complete-data integrated likelihood:

$$\begin{aligned} {{\mathcal {L}}}(Y_{[n]},X_{[n]},Z_{[n]} | m_{Q}) = \int _{ \theta , \pi , \alpha , \beta } {{\mathcal {L}}}(Y_{[n]}, X_{[n]}, Z_{[n]}; \theta , \pi , \alpha , \beta , m_{Q}) \, g(\theta , \pi , \alpha , \beta ) \, d \theta \, d \pi \, d\alpha \, d\beta \text{, } \end{aligned}$$

where \( g(\theta , \pi , \alpha , \beta ) \) is the prior distribution of the parameters. Assuming a non-informative Jeffreys prior \(Dir(0.5, \ldots , 0.5)\) on \(\theta \), applying a Stirling approximation to the gamma function, and finally applying a BIC approximation to the conditional log-likelihood, the approximate ICL criterion for a model \(m_{Q}\) with Q blocks is:

$$\begin{aligned} ICL(m_{Q})&= \max _{\theta , \pi , \alpha , \beta } \log L(Y_{[n]}, X_{[n]}, \tilde{Z}_{[n]}; \theta , \pi , \alpha , \beta , m_{Q} ) \\&\quad - \frac{3}{2} Q(Q+1) \log n(n-1) - \frac{Q-1}{2} \log n , \end{aligned}$$

where \(\tilde{Z}_{[n]}\) is the estimate of \(Z_{[n]}\). The derivation follows exactly the same lines as the proof of Proposition 8 of Daudin et al. (2008).
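
Given the maximized complete-data log-likelihood for each candidate Q, the criterion itself is a one-line computation. The sketch below simply transcribes the display above, with our own naming.

```python
import numpy as np

def icl(max_complete_loglik, n, Q):
    """Approximate ICL(m_Q) for a fitted WSBM with Q blocks on n nodes."""
    penalty_edges = 1.5 * Q * (Q + 1) * np.log(n * (n - 1))
    penalty_theta = 0.5 * (Q - 1) * np.log(n)
    return max_complete_loglik - penalty_edges - penalty_theta

# Model selection: fit the model for each candidate Q (with several random
# restarts, as in Sect. 8) and keep the Q attaining the largest ICL.
```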

7 Simulation

We validate the theoretical results developed in previous sections by conducting simulation studies. In particular, we investigate how fast parameter estimates of WSBM converge to their true values and the accuracy of posterior block allocations. Additionally, we investigate the performance of ICL in choosing the number of blocks.

7.1 Experiment 1 (two-class model)

For each fixed number of nodes n, 50 realizations of the WSBM are generated from the fixed parameter setting given in (8) and (9). The variational inference algorithm derived in Sect. 5 is then applied to estimate the model parameters and class allocations.

We can see from Table 1 that the estimated model parameters converge to their true values as the number of nodes increases, while the posterior class assignments are accurate for all numbers of nodes considered. Table 2 shows that the ICL criterion tends to select the correct number of classes, especially when the number of nodes is large.

$$\begin{aligned} \theta&= (0.7, 0.3) , \end{aligned}$$
(8)
$$\begin{aligned} \pi&= \left( \begin{array}{cc} 0.8 &{} 0.2 \\ 0.3 &{} 0.9 \end{array} \right) , \alpha = \left( \begin{array}{cc} 10.0 &{} 0.3 \\ 3.0 &{} 0.5 \end{array} \right) , \beta = \left( \begin{array}{cc} 2.0 &{} 1.0 \\ 0.2 &{} 1.0 \end{array} \right) . \end{aligned}$$
(9)
Table 1 Convergence analysis of posterior class allocations and parameter estimates under the two-class model
Table 2 Frequency of choosing Q blocks by ICL under different number of nodes n under the two-class model

7.2 Experiment 2 (three-class model)

50 network realizations are obtained under the three-class model with the parameter values given in (10) and (11), for a range of values of n.

$$\begin{aligned} \theta&= (0.5,0.3,0.2) \end{aligned}$$
(10)
$$\begin{aligned} \pi&= \left( \begin{array}{ccc} 0.60 &{} 0.20 &{} 0.30 \\ 0.30 &{} 0.90 &{} 0.10 \\ 0.60 &{} 0.50 &{} 0.20 \\ \end{array} \right) , \alpha = \left( \begin{array}{ccc} 0.50 &{} 2.00 &{} 1.00 \\ 0.30 &{} 0.02 &{} 6.00 \\ 2.00 &{} 0.05 &{} 3.00 \\ \end{array} \right) , \beta = \left( \begin{array}{ccc} 5.00 &{} 0.40 &{} 5.00 \\ 3.00 &{} 12.00 &{} 0.70 \\ 6.00 &{} 0.20 &{} 0.60 \end{array} \right) . \end{aligned}$$
(11)

We can see from Table 3 that the estimated parameters converge to their true values quickly as the number of nodes increases. The ICL criterion tends to overestimate the number of classes when the number of nodes is small, but consistently selects the correct model when the number of nodes is large (Table 4).

Table 3 Convergence analysis of posterior class allocations and parameter estimates under the three-class model
Table 4 Frequency of choosing Q blocks by ICL under different number of nodes n under the three-class model

7.3 Computational complexity

The computational complexity of the variational estimators derived in Sect. 5 scales as \({{\mathcal {O}}}(n^2)\), the same complexity as the variational algorithm developed by Daudin et al. (2008). Therefore, the algorithm may be prohibitively expensive for networks with more than 1000 nodes. The estimated computational times for the two-class model in Sect. 7.1 and for the three-class model in Sect. 7.2 for various values of n are shown in Fig. 1. The estimated computing time at each n is the average running time of the variational algorithm over 20 replications.

Fig. 1 Estimated computing time for the two-class and three-class models

8 Application: Washington bike data set

We apply the WSBM to analyse the Washington bike sharing scheme data set. Information on the start and end stations of trips, as well as the length (travel time) of trips, is available in the data set. We select a time window of one week starting from January 10th, 2016 and construct the adjacency matrix X and weight matrix Y as follows (a code sketch of this construction follows the list):

  • \(X_{ij} = 1\) if there is at least one trip starting from station i and finishing at station j, and \(X_{ij} = 0\) otherwise.

  • \(Y_{ij}\) is the total length of trips (in minutes) from station i to station j.
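
A possible construction in pandas is sketched below. The column names ("start", "end", "duration") and the assumption that durations are recorded in seconds are ours and may not match the raw trip records.

```python
import numpy as np
import pandas as pd

def build_matrices(trips: pd.DataFrame):
    """Build the adjacency matrix X and weight matrix Y from trip records."""
    stations = pd.Index(sorted(set(trips["start"]) | set(trips["end"])))
    n = len(stations)
    i = stations.get_indexer(trips["start"])
    j = stations.get_indexer(trips["end"])
    # Y_ij: total trip length in minutes from station i to station j.
    Y = np.zeros((n, n))
    np.add.at(Y, (i, j), trips["duration"].to_numpy() / 60.0)
    np.fill_diagonal(Y, 0.0)        # the model has no self-loops
    # X_ij = 1 if at least one trip runs from station i to station j.
    X = (Y > 0).astype(int)
    return stations, X, Y
```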

The resulting network consists of 370 nodes with an average out-degree of 36.14. The average total length of trips between any pair of stations is 42.15 minutes. We apply the ICL criterion to select the number of classes for the WSBM. For each number of classes Q, the variational inference algorithm is fitted to the network 20 times with random initializations and the highest value of ICL is recorded; the six-class model is chosen (Table 5). Each bike station is plotted on the map in Fig. 2, with colour representing the estimated class assignment. We observe that bike stations in class 6 (colored in brown) tend to be concentrated in the central area of Washington whereas stations in class 3 (colored in red) tend to be located further from the center. Figure 2 shows some spatial effect in the class assignments whereby stations that are close in distance tend to be in the same cluster, with the exception of class 3 (colored in red). One potential extension of the model is to take the spatial locations of the bike stations into account as covariates.

The estimated class proportions \(\hat{\theta }\) shown in (12) indicate that class 3 has the largest number of stations whereas class 4 has the smallest. The estimated \(\hat{\pi }\) in (13) shows that within-class connectivity is generally higher than between-class connectivity. We further observe that the connection probabilities among bike stations in classes 1, 4, and 6 are substantially higher. Interestingly, the matrix \(\hat{\pi }\) is nearly symmetric, indicating that the probability of a trip from a station in class k to a station in class l is similar to the probability of a trip from class l to class k.

The estimated densities of travel time between each pair of classes are shown in Fig. 3. We observe that the majority of the estimated densities have modes and means close to 0, particularly the densities on the diagonal of Fig. 3. This implies that the total travel times between stations in the same class are quite short. In comparison, the total travel times between stations in different classes tend to be longer. This is reasonable, as the distances between bike stations in different classes tend to be larger, which in turn requires longer travel times.

Fig. 2 Bike stations. Class 1: blue. Class 2: green. Class 3: red. Class 4: cyan. Class 5: black. Class 6: brown (color figure online)

Fig. 3 Estimated densities of total travel time between stations

Table 5 Model selection for the Washington Bike dataset using ICL criterion
$$\begin{aligned} \hat{\theta }&= (0.1985, 0.0975, 0.3256, 0.0270, 0.1377, 0.2136) \end{aligned}$$
(12)
$$\begin{aligned} \hat{\pi }&= \left( \begin{array}{cccccc} \mathbf {0.2847} &{} 0.0539 &{} 0.0036 &{} \mathbf {0.4268} &{} 0.0125 &{} \mathbf {0.2811} \\ 0.0594 &{} 0.0630 &{} 0.0172 &{} 0.1254 &{} 0.0156 &{} 0.0791 \\ 0.0049 &{} 0.0156 &{} 0.0185 &{} 0.0079 &{} 0.0092 &{} 0.0029 \\ \mathbf {0.4113} &{} 0.1140 &{} 0.0136 &{} \mathbf {0.8438} &{} 0.0532 &{} \mathbf {0.4494} \\ 0.0187 &{} 0.0318 &{} 0.0120 &{} 0.0910 &{} \mathbf {0.2159} &{} 0.0241 \\ \mathbf {0.2715} &{} 0.0514 &{} 0.0017 &{} \mathbf {0.4965} &{} 0.0159 &{} \mathbf {0.7268} \\ \end{array} \right) . \end{aligned}$$
(13)

9 Discussion

This paper proposes a weighted stochastic block model (WSBM) for networks, which is an extension of the stochastic block model. A variational inference strategy is developed for parameter estimation. Asymptotic properties of the maximum likelihood estimators and the variational estimators are derived, and the problem of choosing the number of classes is addressed using an ICL criterion. Simulation studies are conducted to evaluate the performance of the variational estimators and the use of the ICL to determine the number of classes. The proposed model and inference methods are applied to an illustrative data set.

It is straightforward to extend the WSBM to allow node covariates. Let \(w_i \in \mathbb {R}^{d}\) be the covariates for each node \(i=1, \ldots , n\), and let \(w_{ij}\) be the covariates for each pair of nodes, \(i,j=1, \ldots , n, i \ne j\). The edge probability \(p_{ij}\) between a pair of nodes ij with node i in block q and node j in block l can be modelled as

$$\begin{aligned} \log \frac{p_{ij}}{1 - p_{ij}} = \xi _{ql,0} + \xi _{ql,1}^{T} w_{ij} + \xi _{ql,2}^{T} w_i + \xi _{ql,3}^{T} w_j, \end{aligned}$$

where \(\xi _{ql,0} \in \mathbb {R}\) and \(\xi _{ql,1}, \xi _{ql,2}, \xi _{ql,3} \in \mathbb {R}^{d}\). Conditional on the block assignments and the existence of an edge between the pair of nodes i, j, \(Y_{ij}\) can be modelled as a gamma random variable with mean \(\mu _{ij}\) and variance \(\sigma _{ij}\), where

$$\begin{aligned} \mu _{ij}&= E(Y_{ij} | Z_i = q, Z_j = l, X_{ij} = 1) = \exp \big ( \phi _{ql, 0} + \phi _{ql, 1}^{T} w_{ij} + \phi _{ql, 2}^{T} w_i + \phi _{ql,3}^{T} w_j \big ), \\ \sigma _{ij}&= Var( Y_{ij} | Z_i = q, Z_j = l, X_{ij} = 1) = \nu _{ql} , \end{aligned}$$

where \(\phi _{ql,0} \in \mathbb {R}\) and \(\phi _{ql,1}, \phi _{ql,2}, \phi _{ql,3} \in \mathbb {R}^{d}\).
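
For a single pair (i, j) with blocks (q, l), the two regression specifications above can be written as the following helpers (a sketch with our own naming):

```python
import numpy as np

def edge_prob(xi0, xi1, xi2, xi3, w_ij, w_i, w_j):
    """p_ij from the logistic model log(p / (1 - p)) = xi0 + xi1'w_ij + ..."""
    eta = xi0 + xi1 @ w_ij + xi2 @ w_i + xi3 @ w_j
    return 1.0 / (1.0 + np.exp(-eta))

def weight_mean(phi0, phi1, phi2, phi3, w_ij, w_i, w_j):
    """mu_ij from the log-linear model for the conditional gamma mean."""
    return float(np.exp(phi0 + phi1 @ w_ij + phi2 @ w_i + phi3 @ w_j))
```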

Many future extensions are possible. First, it is desirable to investigate further theoretical properties of the maximum likelihood and variational estimators of the WSBM parameters, such as asymptotic normality. Furthermore, some of the assumptions imposed in this work to ensure consistency of the estimators may be relaxed. Moreover, the number of blocks is assumed to be fixed in the asymptotic analysis of the estimators; it would be interesting to allow the number of blocks to grow as the number of nodes grows.