1 Introduction

Learning changes in the interactions between random variables is useful in many applications. For example, genes may regulate each other in different ways when external conditions change; the numbers of daily flu-like symptom reports in nearby hospitals may become correlated when a major epidemic breaks out; EEG signals from different regions of the brain may be synchronized or desynchronized when the subject performs different activities. Spotting such changes in interactions may provide key insights into the underlying system.

The interactions among random variables can be formulated as undirected probabilistic graphical models, or Markov networks (MNs) (Koller and Friedman 2009), which express the interactions via conditional independence. We consider a simple model: pairwise MNs, where factors are encoded only over single random variables or pairs of them. By the Hammersley–Clifford theorem (Hammersley and Clifford 1971), the underlying joint probability density function can then be represented as a product of univariate and bivariate factors.

As an important challenge, structure learning of MNs has attracted a significant amount of attention. Earlier methods (Spirtes et al. 2000) use hypothesis testing to learn the conditional independence among random variables, which reflects the absence of edges. Such a problem is proved to be NP-hard in general (Chickering 1996). Methods restricted to a sub-class of graphical models (such as trees or forests) (Chow and Liu 1968; Geman and Geman 1984; Liu et al. 2011) also suffer from growing computational cost.

However, the Hammersley–Clifford theorem, together with recent breakthroughs on sparsity-inducing methods (Tibshirani 1996; Zhao and Yu 2006; Wainwright 2009), gave birth to many sparse structure learning ideas, where a sparse factorization of the joint/conditional density function is estimated to infer the underlying structure of the MN (Friedman et al. 2008; Banerjee et al. 2008; Meinshausen and Bühlmann 2006; Ravikumar et al. 2010). Although most works have focused on parametric models, structure learning has also been conducted on semi-parametric models in recent years (Liu et al. 2009, 2012).

There is also a trend of learning the changes between MNs (Zhang and Wang 2010; Liu et al. 2014; Zhao et al. 2014). Compared to standard structure learning, learning changes views the problem in a more dynamic fashion: instead of estimating a static pattern, we hope to obtain a dynamic one, namely “the change”, by comparing two sets of data, since in some applications the static pattern may be computationally intractable or simply too hard to comprehend. The difference between two patterns, however, may be represented by a few simple incremental effects involving only a small number of nodes or bonds, and thus takes much less effort to learn and understand.

One of the main uses of structural change learning is to spot responding variables in “controlled experiments” (Zhang and Wang 2010) where some key external factors of the experiments are altered, and two sets of samples are obtained. By discovering the changes in the MNs, we can see how random variables have responded to the change of the external stimuli.

In this paper, we first review a recently proposed method of structural change learning between MNs (Liu et al. 2014). This follows a simple idea: if the MNs are products of the pairwise factors, the ratio of two MNs must also be proportional to the ratios of those factors. Moreover, factors that do not change between two MNs will have no contribution to the ratio. This naturally suggests the idea of modelling the changes between two MNs P and Q as the ratio between two MN density functions \(p({\varvec{x}})\) and \(q({\varvec{x}})\). The ratio \(p({\varvec{x}})/q({\varvec{x}})\) is directly estimated from a one-shot estimation (Sugiyama et al. 2012). This density-ratio approach can work well even when each MN is dense (as long as the change is sparse).

We also present some very recent theoretical results along this line of research, which prove the consistency of the density-ratio method in the high-dimensional setting. Support consistency means that the support of the estimated parameter converges in probability to the support of the true parameter; this is an important property for sparsity-inducing methods. It is shown that under certain conditions the density-ratio method recovers the correct parameter sparsity with high probability (Liu et al. 2017b). Moreover, Fazayeli and Banerjee (2016) introduced a theorem for the regularized density-ratio estimator showing that the estimation error, i.e., the \(\ell _2\) distance between the estimated parameter and the true parameter, converges to zero under milder conditions.

For comparison, we also review a few alternative approaches to the change-detection problem between MNs. The differential graphical model learning approach (Zhao et al. 2014) uses a covariance-precision matrix equality to learn changes without going through the learning of the individual MNs. The “jumping” MNs setting (Kolar and Xing 2012) considers a scenario where the observations are received as a sequence and multiple sub-sequences are generated via different parametrizations of the MN.

We organize this paper as follows: Firstly, we introduce the problem formulation of learning changes between MNs in Sect. 2. Secondly, the density ratio approach and two other alternatives are explained in Sect. 3. Section 4 reviews the theoretical results of these approaches. Synthetic and real-world experiments are conducted in Sect. 5 to compare the performance of methods. Finally, in Sects. 6 and 7, we give a few possible future directions and conclude the current developments along this line of research.

2 Formulating changes

In this section, we focus on formulating the change of MNs using density ratio. At the end of this section, a few alternatives are also introduced.

2.1 Structural changes by parametric differences

Detecting changes naturally involves two sets of data. Consider independent samples drawn separately from two probability distributions P and Q on \({\mathbb {R}}^m\):

$$\begin{aligned} {\mathcal {X}}_p :=\{{\varvec{x}}_p^{(i)}\}_{i=1}^{n_p} \mathop {\sim }\limits ^{\mathrm {i.i.d.}}P \quad \text{and}\quad {\mathcal {X}}_q :=\{{\varvec{x}}_q^{(i)}\}_{i=1}^{n_q} \mathop {\sim }\limits ^{\mathrm {i.i.d.}}Q. \end{aligned}$$

We assume that P and Q belong to the family of Markov networks (MNs) consisting of univariate and bivariate factors, i.e., their respective probability densities p and q are expressed as

$$\begin{aligned} p({\varvec{x}};{\varvec{\theta }}^{(p)}) =\frac{1}{Z({\varvec{\theta }}^{(p)})}\exp \left( \sum _{u,v = 1, u\ge v}^{m} {\varvec{\theta }}^{(p)}_{u,v}{}^\top {\varvec{\psi }}_{u,v}(x_u,x_v) \right) , \end{aligned}$$
(1)

where \({\varvec{x}}= (x_{1},\ldots , x_{m})^\top \) is the m-dimensional random variable, \(\top \) denotes the transpose, \({\varvec{\theta }}^{(p)}_{u,v}\) is the b-dimensional parameter vector for the pair \((x_{u}, x_{v})\), and

$$\begin{aligned} {\varvec{\theta }}^{(p)}= \left( {\varvec{\theta }}^{(p)\top }_{1,1},\ldots , {\varvec{\theta }}^{(p)\top }_{m,1},{\varvec{\theta }}^{(p)\top }_{2,2},\ldots , {\varvec{\theta }}^{(p)\top }_{m,2},\ldots ,{\varvec{\theta }}^{(p)\top }_{m,m}\right) ^\top \end{aligned}$$

is the entire parameter vector. The feature function \({\varvec{\psi }}_{u,v}(x_{u},x_{v})\) is a bivariate vector-valued basis function, and \(Z({\varvec{\theta }}^{(p)})\) is the normalization factor defined as

$$\begin{aligned} Z({\varvec{\theta }}^{(p)}) = \int \exp \left( \sum _{u,v = 1, u\ge v}^{m} {\varvec{\theta }}^{(p)}_{u,v}{}^\top {\varvec{\psi }}_{u,v}(x_{u},x_{v})\right) {\mathrm{d}}{\varvec{x}}. \end{aligned}$$

\(q({\varvec{x}}; {\varvec{\theta }}^{(q)})\) is defined in the same way. Such a parametrization is generic when representing pairwise graphical models.
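To make this parametrization concrete, the unnormalized log-density inside the exponential of (1) can be evaluated in a few lines. The sketch below is ours, assuming the simplest setting \(b=1\) with \({\varvec{\psi }}_{u,v}(x_u,x_v) = x_u x_v\), so the parameters form the lower triangle of a matrix `Theta` (all names are hypothetical):

```python
import numpy as np

def unnorm_log_density(x, Theta):
    """Log of the unnormalized factor product in Eq. (1), with b = 1 and
    psi_{u,v}(x_u, x_v) = x_u * x_v; diagonal entries give univariate terms."""
    m = len(x)
    total = 0.0
    for u in range(m):
        for v in range(u + 1):          # the sum over u >= v in Eq. (1)
            total += Theta[u, v] * x[u] * x[v]
    return total

x = np.array([1.0, -1.0, 0.5])
Theta = np.array([[ 0.2,  0.0, 0.0],
                  [ 0.5,  0.1, 0.0],
                  [ 0.0, -0.3, 0.4]])   # lower triangle holds the parameters
log_p_tilde = unnorm_log_density(x, Theta)
```

The intractable part is precisely what this sketch omits: the normalization \(Z({\varvec{\theta }}^{(p)})\), discussed next.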

Directly estimating an MN in this generic form is challenging since \(Z({\varvec{\theta }}^{(p)})\) usually does not have a closed form except for a few special cases (e.g., the Gaussian distribution). Markov chain Monte Carlo (Robert and Casella 2005) can be used to approximate such an integral, but this brings extra approximation error.

Nonetheless, we can define the changes between two MNs as the difference between their parameters: given two parametric models \(p({\varvec{x}};{\varvec{\theta }}^{(p)})\) and \(q({\varvec{x}};{\varvec{\theta }}^{(q)})\), we hope to discover the changes in parameters from P to Q, i.e.,

$$\begin{aligned} {\varvec{\delta }}= {\varvec{\theta }}^{(p)}-{\varvec{\theta }}^{(q)}. \end{aligned}$$

Note that by this definition, the changes are continuous. This is more advantageous than only considering discrete changes of the MN structure, since a weak change of interaction does not necessarily sever or flip the bond between two random variables.

2.2 Density ratio modelling

An important observation is that although two MNs may be complex individually, their changes might be “simple” since many terms may be cancelled while taking the difference, i.e. \({\varvec{\theta }}^{(p)}_{u,v} - {\varvec{\theta }}^{(q)}_{u,v}\) might be zero. The key idea in (Liu et al. 2014) is to consider the ratio of p and q:

$$\begin{aligned} \frac{p({\varvec{x}}; {\varvec{\theta }}^{(p)})}{q({\varvec{x}}; {\varvec{\theta }}^{(q)})} \propto \exp \left( \sum _{u,v = 1, u\ge v}^m ({\varvec{\theta }}^{(p)}_{u,v}-{\varvec{\theta }}^{(q)}_{u,v})^\top {\varvec{\psi }}_{u,v}(x_{u},x_{v})\right) , \end{aligned}$$
(2)

where \({\varvec{\theta }}^{(p)}_{u,v}-{\varvec{\theta }}^{(q)}_{u,v}\) encodes the difference between \(P\) and \(Q\) for factor \({\varvec{\psi }}_{u,v}(x_{u},x_{v})\), i.e., \({\varvec{\theta }}^{(p)}_{u,v} - {\varvec{\theta }}^{(q)}_{u,v}\) is zero if there is no change in the factor \({\varvec{\psi }}_{u,v}(x_{u},x_{v})\).

Once the ratio of p and q is considered, the parameters \({\varvec{\theta }}^{(p)}_{u,v}\) and \({\varvec{\theta }}^{(q)}_{u,v}\) do not have to be estimated individually. Their difference \({\varvec{\delta }}_{u,v}={\varvec{\theta }}^{(p)}_{u,v} - {\varvec{\theta }}^{(q)}_{u,v}\) is sufficient for change detection, as \({\varvec{x}}\) only interacts with this parametric difference in the ratio model. Thus, in the density-ratio formulation, p and q are no longer modelled separately; the ratio is directly modelled as

$$\begin{aligned} r({\varvec{x}};{\varvec{\delta }}) = \frac{1}{N({\varvec{\delta }})} \exp \left( \sum _{u,v = 1, u\ge v}^m {\varvec{\delta }}_{u,v}^\top {\varvec{\psi }}_{u,v}(x_{u},x_{v})\right) , \end{aligned}$$
(3)

where \(N({\varvec{\delta }})\) is the normalization term. This direct formulation also halves the number of parameters from both \({\varvec{\theta }}^{(p)}\) and \({\varvec{\theta }}^{(q)}\) to only \({\varvec{\delta }}\).

The normalization term \(N({\varvec{\delta }})\) is chosen to fulfill \(\int q({\varvec{x}})r({\varvec{x}};{\varvec{\delta }}) {\mathrm{d}}{\varvec{x}}= 1\):

$$\begin{aligned} N({\varvec{\delta }}) = \int q({\varvec{x}}) \exp \left( \sum _{u,v = 1, u\ge v}^m {\varvec{\delta }}_{u,v}^\top {\varvec{\psi }}_{u,v}(x_{u},x_{v}) \right) {\mathrm{d}}{\varvec{x}}, \end{aligned}$$

which is the expectation over \(q({\varvec{x}})\). Note this integral is with respect to the true distribution from which our samples are generated. This expectation form of the normalization term is another notable advantage of the density-ratio formulation, because it can be easily approximated by the sample average over \(\{{\varvec{x}}_{q}^{(i)}\}_{i=1}^{n_q}\mathop {\sim }\limits ^{\mathrm {i.i.d.}}Q\):

$$\begin{aligned} \hat{N}\left( {\varvec{\delta }}; {\varvec{x}}_q^{(1)},\ldots , {\varvec{x}}_q^{(n_q)}\right) := \frac{1}{n_q}\sum _{i=1}^{n_q} \exp \left( \sum _{u,v = 1, u\ge v}^m {\varvec{\delta }}_{u,v}^\top {\varvec{\psi }}_{u,v}(x_{q,u}^{(i)}, x_{q,v}^{(i)}) \right) . \end{aligned}$$

Thus, one can always use this empirical normalization term for any (non-Gaussian) models \(p({\varvec{x}}; {\varvec{\theta }}^{(p)})\) and \(q({\varvec{x}}; {\varvec{\theta }}^{(q)})\).
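A minimal sketch of this empirical normalization term, again assuming the linear feature \(x_u x_v\) (so the inner sum over \(u \ge v\) becomes a quadratic form per sample); the variable names are ours:

```python
import numpy as np

def empirical_normalization(Delta, Xq):
    """Sample-average approximation N_hat of the normalization term,
    assuming psi(x_u, x_v) = x_u * x_v, with delta_{u,v} stored in the
    lower triangle of Delta (rows of Xq are samples from Q)."""
    L = np.tril(Delta)
    # sum_{u >= v} delta_{u,v} x_u x_v for each sample, as a quadratic form
    inner = np.einsum('ni,ij,nj->n', Xq, L, Xq)
    return np.mean(np.exp(inner))

rng = np.random.default_rng(0)
Xq = rng.standard_normal((1000, 3))
N_hat = empirical_normalization(np.zeros((3, 3)), Xq)  # delta = 0 gives exactly 1
```

Since only samples from Q enter, no integral over \({\varvec{x}}\) is ever computed, which is the point made above.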

Interestingly, if one uses \(\psi _{u,v}(x_u, x_v) = x_u x_v\) in the ratio model, this does not mean one assumes the two individual MNs are Gaussian or Ising; it simply means we assume the changes of interactions are linear, while other non-linear interactions remain unchanged. This formulation allows us to consider highly complicated MNs as long as their changes are “simple”.

Throughout the rest of the paper, we simplify the notation from \({\varvec{\psi }}_{u,v}\) to \({\varvec{\psi }}\) by assuming the feature functions are the same for all pairs of random variables.

2.3 Quasi log-likelihood equality

Density ratio is not the only direct modelling approach. Particularly for Gaussian MNs, where the two distributions are parametrized as \(p({\varvec{x}};{\varvec{\varTheta }}^{(p)})\) and \(q({\varvec{x}};{\varvec{\varTheta }}^{(q)})\) with precision matrices \({\varvec{\varTheta }}^{(p)}\) and \({\varvec{\varTheta }}^{(q)}\), one alternative was proposed using the following equality (Zhao et al. 2014):

$$\begin{aligned} {\varvec{\varSigma }}^{(p)}({\varvec{\varTheta }}^{(p)} - {\varvec{\varTheta }}^{(q)}){\varvec{\varSigma }}^{(q)} + {\varvec{\varSigma }}^{(p)} - {\varvec{\varSigma }}^{(q)} = {\varvec{0}}, \end{aligned}$$
(4)

where \({\varvec{\varSigma }}^{(p)}\) is the covariance matrix of the Gaussian distribution p. When we replace the covariance matrices \({\varvec{\varSigma }}^{(p)}\) and \({\varvec{\varSigma }}^{(q)}\) with their sample versions \(\widehat{{\varvec{\varSigma }}}^{(p)}\) and \(\widehat{{\varvec{\varSigma }}}^{(q)}\), it can be seen that \({\varvec{\varTheta }}^{(p)} - {\varvec{\varTheta }}^{(q)}\) is the only variable interacting with the data. Therefore, one may replace it with a single parameter \({\varvec{\varDelta }}\) and minimize the sample version of (4) (see Sect. 3.3 for details).

This direct formulation specifically uses a property of Gaussian MNs: the covariance matrix computed from the data and the precision matrix that encodes the MN structure should approximately cancel each other when multiplied. However, such a relationship does not hold for other distributions in general. The generality of this equality is an interesting open question (see Sect. 6).
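The population version of equality (4) is easy to check numerically: since \({\varvec{\varSigma }}^{(p)}{\varvec{\varTheta }}^{(p)} = I\) and \({\varvec{\varTheta }}^{(q)}{\varvec{\varSigma }}^{(q)} = I\), the left-hand side cancels exactly. A small sketch with our own toy precision matrices (with sample covariances the residual would only be approximately zero):

```python
import numpy as np

# Toy SPD precision matrices for two Gaussian MNs P and Q (our own example)
Theta_p = np.array([[2.0, 0.5, 0.0],
                    [0.5, 2.0, 0.3],
                    [0.0, 0.3, 2.0]])
Theta_q = np.array([[2.0, 0.0, 0.0],
                    [0.0, 2.0, 0.3],
                    [0.0, 0.3, 2.0]])
Sigma_p, Sigma_q = np.linalg.inv(Theta_p), np.linalg.inv(Theta_q)

# Population version of Eq. (4): the left-hand side is exactly zero, because
# Sigma_p @ Theta_p @ Sigma_q = Sigma_q and Sigma_p @ Theta_q @ Sigma_q = Sigma_p
residual = Sigma_p @ (Theta_p - Theta_q) @ Sigma_q + Sigma_p - Sigma_q
```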

Remark

In fact, it is not necessary to combine \({\varvec{\theta }}^{(p)}- {\varvec{\theta }}^{(q)}\) in (2) (or \({\varvec{\varTheta }}^{(p)} - {\varvec{\varTheta }}^{(q)}\) in (4)) into one parameter. However, such a model would be unidentifiable, since many combinations of \({\varvec{\varTheta }}^{(p)}\) and \({\varvec{\varTheta }}^{(q)}\) can produce the same difference. Nonetheless, such indirect modelling may still be useful when the individual structures of the MNs are also of interest. We review an example of such indirect modelling in Sect. 3.4.

3 Learning sparse changes in Markov networks

3.1 Density ratio estimation

Density ratio estimation has recently been introduced to the machine learning community and has proven useful in a wide range of applications (Sugiyama et al. 2012). In Liu et al. (2014), a density ratio estimator called the Kullback–Leibler importance estimation procedure (KLIEP) for log-linear models (Sugiyama et al. 2008; Tsuboi et al. 2009) was employed in learning structural changes.

For a density ratio model \(r({\varvec{x}}; {\varvec{\delta }})\) (as introduced in (3)), the KLIEP method minimizes the Kullback–Leibler divergence from \(p({\varvec{x}})\) to \(\hat{p}({\varvec{x}};{\varvec{\delta }}) = q({\varvec{x}}) r({\varvec{x}};{\varvec{\delta }})\):

$$\begin{aligned} \mathrm {KL}[p\Vert \hat{p}_{\varvec{\delta }}] = \int p({\varvec{x}}) \log \frac{p({\varvec{x}})}{q({\varvec{x}})r({\varvec{x}};{\varvec{\delta }})} {\mathrm{d}}{\varvec{x}}=\text{Const.} - \int p({\varvec{x}}) \log r({\varvec{x}}; {\varvec{\delta }}) {\mathrm{d}}{\varvec{x}}. \end{aligned}$$
(5)

Note that the density ratio model (3) automatically satisfies the non-negativity and normalization constraints:

$$\begin{aligned} r({\varvec{x}};{\varvec{\delta }}) > 0 \quad \text{and}\quad \int q({\varvec{x}}) r({\varvec{x}}; {\varvec{\delta }}) {\mathrm{d}}{\varvec{x}}= 1. \end{aligned}$$

Here we define

$$\begin{aligned} \hat{r}({\varvec{x}}; {\varvec{\delta }}) = \frac{ \exp \left( {\sum _{u,v = 1, u\ge v}^m {\varvec{\delta }}_{u,v}^\top {\varvec{\psi }}(x_{u},x_{v})}\right) }{\hat{N}({\varvec{\delta }}; {\varvec{x}}_q^{(1)},\ldots , {\varvec{x}}_q^{(n_q)})} \end{aligned}$$

as the empirical density ratio model. In practice, one minimizes the negative empirical approximation of the rightmost term in Eq. (5):

$$\begin{aligned} \ell _{\mathrm {KLIEP}}({\varvec{\delta }}; {\mathcal {X}}_p, {\mathcal {X}}_q)= & {} -\frac{1}{n_p}\sum _{i=1}^{n_p} \log \hat{r}({\varvec{x}}_{p}^{(i)}; {\varvec{\delta }})\\ = & {} - \frac{1}{n_p}\sum _{i=1}^{n_p} \sum _{u,v = 1, u\ge v}^m {\varvec{\delta }}_{u,v}^\top {\varvec{\psi }}(x_{p,u}^{(i)},x_{p,v}^{(i)}) \\ &+\,\log \left( \frac{1}{n_q}\sum _{i=1}^{n_q} \exp \left( \sum _{u,v = 1, u\ge v}^m {\varvec{\delta }}_{u,v}^\top {\varvec{\psi }}(x_{q,u}^{(i)},x_{q,v}^{(i)})\right) \right) . \end{aligned}$$

Optimization Since \(\ell _{\mathrm {KLIEP}}({\varvec{\delta }})\) consists of a linear part and a log-sum-exp function (Boyd and Vandenberghe 2004), it is convex with respect to \({\varvec{\delta }}\), and its global minimizer can be numerically found by standard optimization techniques such as gradient descent. The gradient of \(\ell _{\mathrm {KLIEP}}\) with respect to \({\varvec{\delta }}_{u,v}\) is given by

$$\begin{aligned} \nabla _{{\varvec{\delta }}_{u,v}} \ell _{\mathrm{KLIEP}}({\varvec{\delta }}) = -\frac{1}{n_p}\sum _{i=1}^{n_p} {\varvec{\psi }}(x_{p,u}^{(i)},x_{p,v}^{(i)}) + \frac{1}{n_q} \sum _{i=1}^{n_q} \hat{r}({\varvec{x}}_q^{(i)}; {\varvec{\delta }}) {\varvec{\psi }}(x_{q,u}^{(i)},x_{q,v}^{(i)}), \end{aligned}$$
(6)

which can be computed in a straightforward manner for any feature vector \({\varvec{\psi }}(x_{u},x_{v})\).
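The loss and gradient above can be sketched compactly, assuming the linear feature \(\psi (x_u,x_v)=x_u x_v\) so that \({\varvec{\delta }}\) sits in the lower triangle of a matrix, with plain gradient descent as the optimizer; this is our own illustration, not the implementation of Liu et al. (2014):

```python
import numpy as np

def kliep_loss_grad(D, Xp, Xq):
    """KLIEP loss (Sect. 3.1) and its gradient (Eq. (6)) for
    psi(x_u, x_v) = x_u * x_v, with delta_{u,v} in the lower triangle of D."""
    L = np.tril(D)
    lin_p = np.einsum('ni,ij,nj->n', Xp, L, Xp)   # sum_{u>=v} delta_{u,v} x_u x_v
    lin_q = np.einsum('ni,ij,nj->n', Xq, L, Xq)
    logN = np.log(np.mean(np.exp(lin_q)))         # log of empirical normalizer
    loss = -np.mean(lin_p) + logN
    r_hat = np.exp(lin_q - logN)                  # empirical ratio at the Q samples
    pp = np.einsum('ni,nj->nij', Xp, Xp).mean(axis=0)
    pq = np.einsum('n,ni,nj->ij', r_hat, Xq, Xq) / len(Xq)
    return loss, np.tril(pq - pp)                 # Eq. (6), lower triangle only

# gradient descent on the convex, unregularized objective (toy data with P = Q,
# so the estimated change should stay close to zero)
rng = np.random.default_rng(1)
Xp, Xq = rng.standard_normal((500, 3)), rng.standard_normal((500, 3))
D = np.zeros((3, 3))
for _ in range(100):
    loss, grad = kliep_loss_grad(D, Xp, Xq)
    D -= 0.1 * grad
```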

3.2 Sparsity inducing and regularizations

In the search for sparse changes, one may regularize the KLIEP solution with a sparsity-inducing norm \(\sum _{u\ge v} \Vert {\varvec{\delta }}_{u,v} \Vert \), i.e., the group-lasso penalty (Yuan and Lin 2006) where we use \(\Vert \cdot \Vert \) to denote the \(\ell _2\) norm.

Note that the density-ratio approach (Liu et al. 2014) directly sparsifies the difference \({\varvec{\theta }}^{(p)}-{\varvec{\theta }}^{(q)}\), and thus intuitively this method can still work well even if \({\varvec{\theta }}^{(p)}\) and \({\varvec{\theta }}^{(q)}\) are dense as long as \({\varvec{\theta }}^{(p)}-{\varvec{\theta }}^{(q)}\) is sparse. The following is the objective function used in (Liu et al. 2014):

$$\begin{aligned} \hat{{\varvec{\delta }}} = \mathop {\mathrm{argmin}}\limits _{{\varvec{\delta }}} \ell _{\mathrm{KLIEP}}({\varvec{\delta }}) + \lambda \sum _{u,v = 1, u \ge v}^m\Vert {\varvec{\delta }}_{u,v} \Vert . \end{aligned}$$
(7)

In a recent work (Fazayeli and Banerjee 2016), the authors considered structured changes, such as sparse, block-sparse, and node-perturbed sparse changes. These structured changes can be represented via suitable atomic norms (Chandrasekaran et al. 2012; Mohan et al. 2014). For example, a KLIEP objective with a node-perturbation regularizer is

$$\begin{aligned}&{\hat{{\varvec{\varDelta }}}} = \mathop {\mathrm{argmin}}\limits _{{\varvec{\varDelta }}\in {\mathbb {R}}^{m \times m}, {\varvec{L}}\in {\mathbb {R}}^{m\times m}} \ell _{\mathrm{KLIEP}}({\varvec{\varDelta }}) + \lambda _1\Vert {\varvec{\varDelta }}\Vert _1 + \lambda _2 \sum _{v=1}^m \left( \sum _{u=1}^m |L_{u,v}|^k\right) ^{\frac{1}{k}}\nonumber \\ &\text{subject to: } {\varvec{\varDelta }}= {\varvec{L}}+ {\varvec{L}}^\top . \end{aligned}$$
(8)

Such a regularization can be used to discover perturbed nodes, i.e., nodes whose connectivity pattern to the other nodes differs completely between the two networks.

Optimization Although the original KLIEP objective is smooth and convex, sparsity-inducing norms are in general non-smooth. Proximal gradient methods, such as the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) (Beck and Teboulle 2009), can be utilized to solve regularized KLIEP objectives. A FISTA-like algorithm with a faster rate of convergence was proposed in Fazayeli and Banerjee (2016).
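The key ingredient of such proximal methods is the proximal operator of the group-lasso penalty in (7), which reduces to block-wise soft-thresholding. A sketch with hypothetical group indexing (the pairs and values below are our own illustration):

```python
import numpy as np

def group_soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_2 for one parameter group delta_{u,v}:
    shrink the whole sub-vector toward zero, zeroing it entirely when its
    norm falls below tau (this is what creates group-wise sparsity)."""
    norm = np.linalg.norm(v)
    if norm <= tau:
        return np.zeros_like(v)
    return (1.0 - tau / norm) * v

# one proximal step delta <- prox(delta - eta * grad) on two illustrative groups
groups = {(1, 0): np.array([0.05, -0.02]),   # weak change: zeroed out
          (2, 1): np.array([1.00, 0.80])}    # strong change: shrunk but kept
shrunk = {k: group_soft_threshold(v, tau=0.1) for k, v in groups.items()}
```

Within FISTA, this operator is applied to every group after each gradient step on \(\ell _{\mathrm{KLIEP}}\).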

3.3 Covariance-precision matching

As mentioned above, the density-ratio formulation is not the only one that motivates direct modelling. For the formulation using the equality (4), we can solve the following sparsity-inducing objective, introduced in Zhao et al. (2014):

$$\begin{aligned} \hat{{\varvec{\varDelta }}} = \mathop {\mathrm{argmin}}\limits _{{\varvec{\varDelta }}} \Vert {\varvec{\varDelta }}\Vert _1 ~~ \text{subject to } \Vert \hat{{\varvec{\varSigma }}}^{(p)} {\varvec{\varDelta }}\hat{{\varvec{\varSigma }}}^{(q)} + \hat{{\varvec{\varSigma }}}^{(p)} - \hat{{\varvec{\varSigma }}}^{(q)}\Vert _{\infty } \le \epsilon , \end{aligned}$$
(9)

where \(\epsilon \) is a hyper-parameter. The constraint enforces the equality (4), with a single parameter \({\varvec{\varDelta }}\) replacing \({\varvec{\varTheta }^{(p)}}- {\varvec{\varTheta }^{(q)}}\). To obtain a sparse solution, we threshold the solution at a level \(\tau \), i.e., entries with \(|\hat{\varDelta }_{u,v}|<\tau \) are rounded to 0.

Optimization This method becomes computationally demanding as the dimension m grows. An Alternating Direction Method of Multipliers (ADMM) procedure (Boyd et al. 2011) was implemented based on an augmented version of (9) (see Section 3.3 of Zhao et al. 2014 for details).
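For small m, objective (9) can alternatively be written as a linear program, since the constraint is affine in \({\varvec{\varDelta }}\) via \(\mathrm{vec}({\varvec{\varSigma }}^{(p)}{\varvec{\varDelta }}{\varvec{\varSigma }}^{(q)}) = ({\varvec{\varSigma }}^{(q)\top } \otimes {\varvec{\varSigma }}^{(p)})\,\mathrm{vec}({\varvec{\varDelta }})\). The sketch below is our own (using scipy's generic LP solver rather than the ADMM procedure of Zhao et al. 2014) and is only practical for small dimensions:

```python
import numpy as np
from scipy.optimize import linprog

def direct_difference_lp(S_p, S_q, eps):
    """Solve Eq. (9): min ||Delta||_1 s.t. ||S_p Delta S_q + S_p - S_q||_inf <= eps.
    Variables are [vec(Delta), t] with |vec(Delta)| <= t element-wise."""
    m = S_p.shape[0]
    n = m * m
    K = np.kron(S_q.T, S_p)                 # vec(S_p Delta S_q) = K vec(Delta)
    c_vec = (S_p - S_q).ravel(order='F')    # column-major vec()
    I, Z = np.eye(n), np.zeros((n, n))
    A_ub = np.vstack([
        np.hstack([ I, -I]),                #  z - t <= 0
        np.hstack([-I, -I]),                # -z - t <= 0
        np.hstack([ K,  Z]),                #  K z <= eps - c
        np.hstack([-K,  Z]),                # -K z <= eps + c
    ])
    b_ub = np.concatenate([np.zeros(2 * n), eps - c_vec, eps + c_vec])
    cost = np.concatenate([np.zeros(n), np.ones(n)])  # sum of t = ||Delta||_1
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * n))
    return res.x[:n].reshape(m, m, order='F')

# toy check with exact (population) covariances: recovers Theta_p - Theta_q
Theta_p = np.array([[2.0, 0.5], [0.5, 2.0]])
Theta_q = np.array([[2.0, 0.0], [0.0, 2.0]])
Delta_hat = direct_difference_lp(np.linalg.inv(Theta_p),
                                 np.linalg.inv(Theta_q), eps=1e-6)
```

With sample covariances, \(\epsilon\) and the threshold \(\tau\) would be tuned rather than set near zero.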

3.4 Maximizing joint likelihood

As mentioned in Sect. 2.3, one does not have to use direct modelling to learn sparse changes between MNs. In fact, separate modelling may not only discover changes but also recover the individual MNs themselves. A method based on the fused lasso (Tibshirani et al. 2005) has been developed along these lines (Zhang and Wang 2010). This method also sparsifies \({\varvec{\theta }}^{(p)}- {\varvec{\theta }}^{(q)}\) directly.

The original method conducts feature-wise neighborhood regression (Meinshausen and Bühlmann 2006) jointly for P and Q, which can be conceptually understood as jointly maximizing the local conditional Gaussian likelihood for each random variable \(x_t\). A slightly more general form of the learning criterion may be summarized as

$$\begin{aligned}&\min _{{\varvec{\theta }}^{(p)}_{t} \in {\mathbb {R}}^{m-1}, {\varvec{\theta }}^{(q)}_{t}\in {\mathbb {R}}^{m-1}} \ell _{t}({\varvec{\theta }}^{(p)}_t; {\mathcal {X}}_p)+ \ell _{t}({\varvec{\theta }}^{(q)}_t; {\mathcal {X}}_q)\nonumber \\ &\qquad +\, \lambda _1 (\Vert {\varvec{\theta }}^{(p)}_t\Vert _1+\Vert {\varvec{\theta }}^{(q)}_t\Vert _1) + \lambda _2 \Vert {\varvec{\theta }}^{(p)}_{t}-{\varvec{\theta }}^{(q)}_{t}\Vert _1, \end{aligned}$$
(10)

where \(\ell _{t}({\varvec{\theta }};{\mathcal {X}}_p)\) is the negative log conditional likelihood for the t-th random variable \(x_t\in {\mathbb {R}}\) given the rest \({\varvec{x}}_{\backslash t} \in {\mathbb {R}}^{m-1}\):

$$\begin{aligned} \ell _{t}({\varvec{\theta }};{\mathcal {X}}_p) = -\frac{1}{n_p}\sum _{i=1}^{n_p}\log p(x_{p,t}^{(i)}|{\varvec{x}}_{p,\backslash t}^{(i)};{\varvec{\theta }}), \end{aligned}$$

where each dimension of \({\varvec{\theta }}\) corresponds to one of the potential neighbors of \(x_t\). \(\ell _{t}({\varvec{\theta }};{\mathcal {X}}_q)\) is defined in the same way as \(\ell _{t}({\varvec{\theta }};{\mathcal {X}}_p)\).

Since the fused-lasso-based method directly sparsifies the changes in MN structure, it can work well even when each MN is not sparse (when \(\lambda _1\) is set to 0).

Learning Changes in Sequence Another recent development (Kolar and Xing 2012) along this line of research assumes the data points are received sequentially, i.e., we observe \({\varvec{x}}^{(1)}, {\varvec{x}}^{(2)},\ldots , {\varvec{x}}^{(T)}\) over time points \({\mathcal{T}} = \{1, 2,\ldots , T\}\). Suppose \({\mathcal{T}}\) can be segmented into K disjoint unknown subsets \({\mathcal{T}} = \cup _{k\in \{1\dots K\}}{\mathcal{T}}_k\) with \({\varvec{x}}_{{\mathcal{T}}_k} \sim p({\varvec{x}}; {\varvec{\theta }}^{({\mathcal{T}}_k)})\). The task is to segment such a sequence and learn an estimate \(\widehat{{\varvec{\theta }}}^{({\mathcal{T}}_k)}\) for each segment. We can extend the fused-lasso idea in (10) and minimize the penalized negative joint likelihood over all single observations:

$$\begin{aligned} \mathop {\mathrm{argmin}}\limits _{{\varvec{\theta }}^{(i)}, i \in \{1 \dots T\}} \sum _{i=1}^T \ell ({\varvec{\theta }}^{(i)}; {\varvec{x}}^{(i)}) + \lambda _1 \sum _{i=1}^T \Vert {\varvec{\theta }}^{(i)}\Vert _1 + \lambda _2 \sum _{i=1}^{T-1} \Vert {\varvec{\theta }}^{(i+1)} - {\varvec{\theta }}^{(i)}\Vert _1, \end{aligned}$$

where the fused-lasso term sparsifies the changes between MNs at adjacent time points, so the learned \({\varvec{\theta }}^{(1)}, {\varvec{\theta }}^{(2)},\ldots , {\varvec{\theta }}^{(T)}\) are “piecewise-constant” and the segments are automatically determined from them. A block-coordinate descent procedure was proposed to solve this problem efficiently (Kolar et al. 2010).

4 Theoretical analysis

The KLIEP algorithm not only performs well in practice but is also justified theoretically. In this section, we first introduce the support-recovery theorem of KLIEP and then review some recent theoretical developments in direct change learning.

4.1 Preliminaries

In the previous section, a sub-vector of \({\varvec{\delta }}\) indexed by a pair (u, v) corresponds to a specific edge of an MN. From now on, we switch to a “unitary” index system, as our analysis depends on neither the edges nor the structure of the graph.

We introduce the true parameter \({\varvec{\delta }}^*\) satisfying \(p({\varvec{x}})=q({\varvec{x}})r({\varvec{x}};{\varvec{\delta }}^*)\), and the pairwise index set \(E = \{(u,v) ~|~ u\ge v\}\). Two sets of sub-vector indices regarding \({\varvec{\delta }}^*\) and E are defined as \(S = \{t'\in E ~|~ \Vert {\varvec{\delta }}^*_{t'}\Vert \ne 0\}\) and \(S^c = \{t'' \in E ~|~ \Vert {\varvec{\delta }}^*_{t''}\Vert = 0\}\). We rewrite the objective (7) as

$$\begin{aligned} \hat{{\varvec{\delta }}} = \mathop {\mathrm{argmin}}\limits _{{\varvec{\delta }}} \ell _{\mathrm {KLIEP}}({\varvec{\delta }})+\lambda _{n_p} \sum _{t\in S \cup S^c}\Vert {\varvec{\delta }}_{t}\Vert . \end{aligned}$$
(11)

Similarly, we can define \({\hat{S}} = \{t' \in E ~|~ \Vert \hat{{\varvec{\delta }}}_{t'}\Vert \ne 0\}\) and \({\hat{S}}^c\) accordingly.

The sample Fisher information matrix \({\mathcal {I}} \in {\mathbb {R}}^{\frac{b(m^2+m)}{2} \times \frac{b(m^2+m)}{2}}\) denotes the Hessian of the log-likelihood: \({\mathcal {I}} = \nabla ^2 \ell _{\mathrm{KLIEP}} ({\varvec{\delta }}^*)\). \({\mathcal {I}}_{AB}\) is the sub-matrix of \({\mathcal {I}}\) whose rows and columns are indexed by the sets \(A, B \subseteq E\), respectively.

In this section, we prove support consistency, i.e., that \(S={\hat{S}}\) and \(S^c={\hat{S}}^c\) with high probability (see, e.g., Chapter 11 in Hastie et al. 2015 for an introduction to support consistency).

4.2 Assumptions

We try not to impose assumptions directly on the individual MNs, as the essence of the KLIEP method is that it can handle various changes regardless of the types of the individual MNs.

The first two assumptions are essential to many support consistency theorems (e.g., Eqs. (15) and (16) in Wainwright 2009, Assumptions A1 and A2 in Ravikumar et al. 2010). These assumptions are made on the Fisher information matrix.

Assumption 1

(Dependency assumption) The sample Fisher information submatrix \({\mathcal {I}}_{{SS}}\) has bounded eigenvalues: \( \varLambda _{\mathrm{min}}({\mathcal {I}}_{{SS}}) \ge \lambda _{\mathrm{min}} > 0, \) with probability \(1-\xi _q\), where \(\varLambda _{{\mathrm {min}}}\) is the minimum-eigenvalue operator of a symmetric matrix.

This assumption on the submatrix of \({\mathcal {I}}\) is to ensure that the density ratio model is identifiable and the objective function is “reasonably convex”.

Assumption 2

(Incoherence assumption) \( \max _{t'' \in S^c}\Vert {\mathcal {I}}_{t''S} {\mathcal {I}}_{SS}^{-1}\Vert _1 \le 1-\alpha \) for some \(0<\alpha \le 1\), with probability 1, where \(\Vert Y\Vert _1 = \sum _{i,j} \Vert Y_{i,j}\Vert _1\).

This assumption says that the unchanged edges cannot exert overly strong effects on the changed edges. Note that this assumption is sometimes called the “irrepresentability” condition.

Assumption 3

(Smoothness assumption on likelihood ratio) The log-likelihood ratio \(\ell _{\mathrm {KLIEP}}({\varvec{\delta }})\) is smooth around its optimal value, i.e., it has bounded derivatives

$$\begin{aligned}&\max _{{\varvec{u}}, \Vert {\varvec{u}}\Vert \le \Vert {\varvec{\delta }}^*\Vert }\left\| \nabla ^2 \ell _{\mathrm {KLIEP}}({\varvec{\delta }}^*+{\varvec{u}})\right\| \le \lambda _{\mathrm {max}}< \infty ,\\ &\max _{t\in S \cup S^c} \max _{{\varvec{u}}, \Vert {\varvec{u}}\Vert \le \Vert {\varvec{\delta }}^*\Vert } {\left| \left| \left| \nabla _{{\varvec{\delta }}_t}\nabla ^2 \ell _{\mathrm {KLIEP}}({\varvec{\delta }}^* + {\varvec{u}}) \right| \right| \right| } \le \lambda _{3,{\mathrm {max}}}<\infty , \end{aligned}$$

with probability 1.

\(\left\| \cdot \right\|, {\left| \left| \left| \cdot \right| \right| \right| }\) are the spectral norms of a matrix and a tensor, respectively (see, e.g., Tomioka and Suzuki 2014 for the definition of spectral norm of a tensor). This assumption guarantees the log-likelihood function is well-behaved. Now, we state the following assumptions on the density ratio:

Assumption 4

(Correct model assumption) The density ratio model is correct, i.e. there exists \({\varvec{\delta }}^*\) such that

$$\begin{aligned} p({\varvec{x}}) = r({\varvec{x}};{\varvec{\delta }}^*)q({\varvec{x}}). \end{aligned}$$

Although analysing the mis-specified ratio model (Kanamori et al. 2010) is certainly an interesting open question, we focus on correctly specified models in this section.

Assumption 5

(Smooth density ratio assumption) For any vector \({\varvec{u}}\in {\mathbb {R}}^{\mathrm {dim}({\varvec{\delta }}^*)}\) such that \(\Vert {\varvec{u}}\Vert \le \Vert {\varvec{\delta }}^*\Vert \) and every \(a\in {\mathbb {R}}\), the following inequality holds:

$$\begin{aligned} {\mathbb{E}}_q [\exp (a(r({\varvec{x}}, {\varvec{\delta }}^* + {\varvec{u}}) - 1 ))] \le \exp (Ma^2), \end{aligned}$$

where \(M>0\) is a constant independent of m. This assumption states that the density ratio model, around its optimal parameter, should not often take large values over samples from Q.

4.3 Successful support recovery of KLIEP (Liu et al. 2017a, b)

Theorem 1

Suppose that Assumptions 1–5 as well as

$$\begin{aligned} \min _{t'\in S} \Vert {\varvec{\delta }}^*_{t'}\Vert \ge \frac{10}{\lambda _{\mathrm{min}}} \sqrt{d}\lambda _{n_p} \end{aligned}$$
(12)

are satisfied, where d is the number of changed edges defined as \(d = |S|\), i.e., the cardinality of the set of non-zero parameter groups. Suppose also that the regularization parameter is chosen so that

$$\begin{aligned} M_1 \sqrt{\frac{{\log \frac{m^2+m}{2}}}{n_p}} \le \lambda _{n_p} \le M_2 \min \left( \frac{{\Vert {{\varvec{\delta }}}^{*}\Vert }}{\sqrt{b}}, 1\right) , \end{aligned}$$
(13)

and \(n_q \ge M_3 n_p^2\), where \(M_1, M_2\), and \(M_3\) are constants. Then there exist constants \(L_1\), \(K_1\), and \(K_2\) such that if \(n_p\ge L_1 d^2\log \frac{m^2+m}{2}\), then with probability at least

$$\begin{aligned} 1- \exp \left( - K_1 \lambda _{n_p}^2n_p \right) - 4\exp \left( -K_2 dn_q \lambda _{n_p}^4 \right) - \xi _q, \end{aligned}$$

the following properties hold:

  • Unique Solution: The solution of (11) is unique.

  • Successful Change Detection: \({\hat{S}} = S\) and \({\hat{S}}^c = S^c\).

The proof of this theorem follows the Primal-dual witness construction (see, e.g., Section 11.4.2 in Hastie et al. 2015).

Remark

The main conclusion of this theorem is that if the regularization parameter is chosen reasonably (13) and the true non-zero groups \(\Vert {\varvec{\delta }}^*_{t'}\Vert , {t'}\in S\) are large enough (12), then with high probability we are guaranteed to recover the correct support of the parameters. The number of samples \(n_p\) only needs to grow linearly with \(\log m\), while \(n_q\) grows quadratically with \(n_p\).
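To make these scalings concrete, the sketch below (with hypothetical constants \(M_1 = L_1 = M_3 = 1\); Theorem 1 only guarantees that such constants exist) evaluates the lower end of (13) and the sample-size requirements for a few problem sizes:

```python
import numpy as np

# Hypothetical constants M1 = L1 = M3 = 1; the theorem only guarantees existence.
M1, L1, M3, d = 1.0, 1.0, 1.0, 6

for m, n_p in [(50, 500), (100, 2000), (200, 8000)]:
    lam_lower = M1 * np.sqrt(np.log((m**2 + m) / 2) / n_p)  # lower end of (13)
    n_p_needed = L1 * d**2 * np.log((m**2 + m) / 2)          # requirement on n_p
    n_q_needed = M3 * n_p**2                                 # requirement on n_q
    print(m, n_p, round(lam_lower, 4), round(n_p_needed, 1), n_q_needed)
```

Note how mildly the requirement on \(n_p\) grows with the dimension m, while the requirement on \(n_q\) is driven by \(n_p^2\).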

4.4 \(\ell _2\) Consistency of KLIEP with atomic norm (Fazayeli and Banerjee 2016)

As it was introduced in Sect. 3.2, atomic norms can be used to learn changes with special topological structures. Instead of support recovery, we focus on the \(\ell _2\) loss between the estimated parameter \(\hat{{\varvec{\delta }}}\) and the true parameter \({\varvec{\delta }}^*\), i.e., \(\Vert {\varvec{\delta }}^* - \hat{{\varvec{\delta }}}\Vert \).

First, we generalize our objective function as

$$\begin{aligned} \hat{{\varvec{\delta }}} = \mathop {\mathrm{argmin}}\limits _{{\varvec{\delta }}\in {\mathbb {R}}^{\frac{m^2+m}{2}}} \ell _{\mathrm {KLIEP}}({\varvec{\delta }})+ \lambda _{n_p,n_q} R({\varvec{\delta }}), \end{aligned}$$
(14)

where R is an atomic norm function.

This theorem relies on the Restricted Strong Convexity (RSC) property of the objective function over the Error Set. Intuitively, if \(\ell _{\mathrm {KLIEP}}({\varvec{\delta }})\) is “highly curved”, a small \(|\ell _{\mathrm {KLIEP}}(\hat{{\varvec{\delta }}}) - \ell _{\mathrm {KLIEP}}({\varvec{\delta }}^*)|\) ensures a small \(\Vert \hat{{\varvec{\delta }}} - {\varvec{\delta }}^*\Vert \). Thus we only need to determine how \(|\ell _{\mathrm {KLIEP}}(\hat{{\varvec{\delta }}}) - \ell _{\mathrm {KLIEP}}({\varvec{\delta }}^*)|\) approaches zero as the number of samples goes to infinity, which is a more accessible target.

To make sure our objective has such a “strongly convex” curvature, one needs to impose a uniform lower-bound on the eigenvalues of the objective Hessian (a.k.a., sample Fisher information matrix \({\mathcal {I}}\)). However, this is not realistic for the high-dimensional setting, as \({\mathcal {I}}\) is certainly rank-deficient. As an alternative, we impose an assumption on the convexity of \(\ell_{{\rm KLIEP}}\) over a constrained set:

Restricted strong convexity condition The function \(\ell \) is Restricted Strong Convex (RSC) over a cone C if there exists a constant \(\kappa > 0\) such that \(\forall {\varvec{u}}\in C\),

$$\begin{aligned} \ell ({\varvec{\delta }}^* + {\varvec{u}}) - \ell ({\varvec{\delta }}^*) - \langle {\varvec{u}}, \nabla \ell ({\varvec{\delta }}^*)\rangle \ge \kappa \Vert {\varvec{u}}\Vert ^2. \end{aligned}$$
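For intuition, the RSC inequality can be verified directly on a toy quadratic loss \(\ell ({\varvec{\delta }}) = \frac{1}{2}{\varvec{\delta }}^\top H {\varvec{\delta }}\) (our own choice, not the KLIEP loss), where it holds over all of \({\mathbb {R}}^m\) with \(\kappa = \lambda _{\mathrm {min}}(H)/2\):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic loss ell(delta) = 0.5 * delta' H delta with H positive definite.
A = rng.standard_normal((5, 5))
H = A @ A.T + np.eye(5)

def ell(delta):
    return 0.5 * delta @ H @ delta

def grad(delta):
    return H @ delta

delta_star = rng.standard_normal(5)
kappa = 0.5 * np.linalg.eigvalsh(H)[0]  # lambda_min(H) / 2

for _ in range(100):
    u = rng.standard_normal(5)
    lhs = ell(delta_star + u) - ell(delta_star) - u @ grad(delta_star)
    # For a quadratic loss the left-hand side equals 0.5 * u' H u >= kappa * ||u||^2.
    assert lhs >= kappa * (u @ u) - 1e-9
```

In the high-dimensional setting H is rank-deficient, \(\lambda _{\mathrm {min}}(H) = 0\), and a global check like this fails; restricting \({\varvec{u}}\) to a cone is exactly what rescues the argument.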

If \({\varvec{\delta }}^* - \hat{{\varvec{\delta }}} \in C\), it is possible to obtain a deterministic bound (Theorem 2 in Banerjee et al. 2014) on the \(\ell _2\) estimation error

$$\begin{aligned} \Vert {\varvec{\delta }}^* - \hat{{\varvec{\delta }}}\Vert _2 = O\left( \frac{\lambda _{n_p, n_q}}{\kappa }\varPsi (C)\right) , \end{aligned}$$

where \(\varPsi (C)\) is the norm compatibility constant (Negahban et al. 2009), which can be easily bounded. Note that although this bound itself is not probabilistic, the parameter \(\lambda _{n_p, n_q}\) is random and the RSC condition may hold only with some probability. One can infer the sample complexity from these bounds.

Two things remain to be shown. First, we need to find such a cone which contains \(\hat{{\varvec{\delta }}} - {\varvec{\delta }}^*\). Second, we need to prove \(\ell _{{\rm KLIEP}}\) is RSC on this cone. We start with the first problem.

Error set (Lemma 1 in Banerjee et al. 2014) For any convex loss \(\ell ({\varvec{\delta }})\), if \(\lambda _{n_p, n_q}\) is large enough, i.e.,

$$\begin{aligned} \lambda _{n_p, n_q}\ge \beta R^*(\nabla \ell ({\varvec{\delta }}^*) ),\quad \beta > 1 \end{aligned}$$

where \(R^*\) is the dual norm of R, it can be proven that the estimation error \({\varvec{u}}= {\varvec{\delta }}^* - \hat{{\varvec{\delta }}}\) lies in an Error Set:

$$\begin{aligned} E_r = \left\{ {\varvec{u}}\in {\mathrm{dom}}({\varvec{\delta }}) \,\bigg |\, R({\varvec{\delta }}^* + {\varvec{u}}) \le R({\varvec{\delta }}^*) + \frac{1}{\beta } R({\varvec{u}}) \right\} , \end{aligned}$$

where \({\mathrm{dom}}({\varvec{y}})\) is the domain of \({\varvec{y}}\). Let us define \(C_r = {\mathrm{cone}}(E_r)\).
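A quick numerical sanity check of the error-set definition, with R taken as the \(\ell _1\) norm and arbitrary vectors and \(\beta \) of our own choosing: a perturbation concentrated on the support of \({\varvec{\delta }}^*\) can shrink \(R({\varvec{\delta }}^* + {\varvec{u}})\) and satisfy the membership inequality, while a perturbation off the support inflates it and violates membership.

```python
import numpy as np

beta = 2.0
delta_star = np.array([1.0, -2.0, 0.0, 0.0])  # sparse true parameter

def in_error_set(u):
    R = np.abs  # R = l1 norm: R(v) = np.abs(v).sum()
    return R(delta_star + u).sum() <= R(delta_star).sum() + R(u).sum() / beta

# Perturbation on the true support: R(delta* + u) shrinks, so u is in E_r.
print(in_error_set(np.array([-0.5, 0.5, 0.0, 0.0])))   # True
# Perturbation off the support: R(delta* + u) inflates, so u is not in E_r.
print(in_error_set(np.array([0.0, 0.0, 1.0, 1.0])))    # False
```

This is why the error set is “small”: estimation errors of a sparse-regularized estimator cannot spread mass arbitrarily far off the true support.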

In fact, it can be shown that if

$$\begin{aligned} \lambda _{n_p, n_q} \ge \frac{c\cdot (w(\varOmega _R) + \epsilon )}{\sqrt{\min (n_p, n_q)}}, \end{aligned}$$

where w(A) is the Gaussian width of a set A (Ledoux and Talagrand 2013) and \(\varOmega _R = \{{\varvec{u}}| R({\varvec{u}}) \le 1\}\), then \(\lambda _{n_p, n_q}\ge \beta R^*(\nabla \ell ({\varvec{\delta }}^*) )\) holds automatically with high probability (Theorem 1 in Fazayeli and Banerjee 2016). Now we have a cone \(C_r\) where \(\hat{{\varvec{\delta }}} - {\varvec{\delta }}^*\) resides.

As to the second problem, it can be proven that \(\ell _{{\rm KLIEP}}\) is RSC at \(C_r\) with high probability once \(n_q \ge n_0\), where \(n_0 = w^2(C_r \cap S)\) and S is the unit hypersphere (Theorem 2 in Fazayeli and Banerjee 2016). Thus \(n_0\) is the minimum number of samples required from Q to be able to apply this theorem.

Putting everything together, we have the main theorem proved in Fazayeli and Banerjee (2016):

Theorem 2

(\(\ell _2\) Consistency of atomic norms) If Assumption 5 holds, and \(\hat{{\varvec{\delta }}}\) is the minimizer of (14), then with probability at least \(1 - M_1{\mathrm{exp}}(-\epsilon ^2)\) the following holds:

$$\begin{aligned} \lambda _{n_p, n_q} \ge \frac{M_2}{\sqrt{\min (n_p, n_q)}} (w(\varOmega _R)+\epsilon ) \end{aligned}$$

and for \(n_q \ge c_1 w^2 (C_r \cap S)\), with high probability, the estimate \(\hat{{\varvec{\delta }}}\) satisfies

$$\begin{aligned} \Vert \hat{{\varvec{\delta }}} - {\varvec{\delta }}^* \Vert _2 = O\left( \frac{w(\varOmega _R)}{\sqrt{\min (n_p, n_q)}}\right) \varPsi (C_r) \end{aligned}$$

Note that the constants \(M_1\) and \(M_2\) in this theorem are not the same as the ones in Theorem 1. To apply this theorem, we need to know the bounds of \(w(\varOmega _R)\) and \(\varPsi (C_r)\) for a specific norm R. These bounds have been proven in the previous literature (see, e.g., Banerjee et al. 2014). For example, if R is the \(\ell _1\) norm, then \(\varPsi (C_r) \le 4\sqrt{d}\) and \(w(\varOmega _R) \le c \log m\), so applying the above theorem, we have

$$\begin{aligned} \Vert \hat{{\varvec{\delta }}} - {\varvec{\delta }}^* \Vert _2 = O\left( \sqrt{\frac{d\log m}{\min (n_p, n_q)}}\right) . \end{aligned}$$

Remark

Although this bound does not directly prove the support consistency, we learn that the sample complexity \(\min (n_p, n_q) = \varOmega ( d\log m)\) guarantees the convergence of the estimation error in \(\ell _2\) norm. As to \(n_q\), it should also satisfy \(n_q \ge c_1 w^2(C_r\cap S)\), which is again \(n_q = \varOmega ( d \log m )\) in the case of the \(\ell _1\) norm. This sample complexity is milder than the one obtained by Liu et al. in the previous section, \(n_p = \varOmega (d^2 \log \frac{m^2 + m}{2})\) and \(n_q = \varOmega (n_p^2)\). Nonetheless, both theories apply to the high-dimensional regime \(m\gg \min (n_p, n_q)\).
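The two sample-complexity scalings can be put side by side (constants omitted; the numbers only illustrate how the rates grow with m and d):

```python
import numpy as np

# Sample-complexity scalings of the two analyses (constants omitted).
d = 6
for m in [50, 200, 1000]:
    atomic = d * np.log(m)                  # Sect. 4.4 with the l1 norm
    kliep = d**2 * np.log((m**2 + m) / 2)   # Sect. 4.3 (KLIEP support recovery)
    print(m, round(atomic, 1), round(kliep, 1))
```

The extra factor of d and the \(\log \frac{m^2+m}{2}\) term make the Sect. 4.3 requirement noticeably heavier, though both remain logarithmic in the dimension.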

4.5 Support consistency of covariance-precision matching (Zhao et al. 2014)

In this section, we introduce the support recovery theorem of the Covariance-Precision Matching method (9) in terms of support consistency on Gaussian MNs. Specifically for Gaussian MNs, we need a slightly different set of notations, as they are parametrized in matrix forms. \(\varSigma _{j,k}^{(p)}\) is the (j, k)-th element of the matrix \({\varvec{\varSigma }}^{(p)}\) and \(\varSigma _{\mathrm{max}}^{(p)}\) is \(\max _j \varSigma ^{(p)}_{j,j}\).

The first assumption ensures that the “amount of change” is fixed, that the change is always sparse, and that it does not grow with the dimension m.

Assumption 6

The difference matrix \({\varvec{\varDelta }}\) has \(d \le m\) non-zero elements in its upper triangular sub-matrix. \(|{\varvec{\varDelta }}|_1 \le M_0\), and both d and \(M_0\) do not depend on dimension m.

The second assumption assures that the covariates are not strongly dependent if there are many changes in the precision matrix. This is similar to the incoherence assumption used in Assumption 2.

Assumption 7

The constants \(\mu ^{(p)} = \max _{j\ne k} |\varSigma _{j,k}^{(p)}|\) and \(\mu ^{(q)} = \max _{j\ne k} |\varSigma _{j,k}^{(q)}|\) must satisfy \(\mu = 4\max (\mu ^{(p)}\varSigma _{{\mathrm {max}}}^{(q)}, \mu ^{(q)}\varSigma _{{\mathrm {max}}}^{(p)}) \le \frac{\varSigma ^S_{{\mathrm {min}}}}{2d} \), where

$$\begin{aligned} \varSigma ^S_{{\mathrm {min}}} = {\mathrm {min}}_{j,k}\left( \varSigma ^{(q)}_{jj}\varSigma ^{(p)}_{jj}, \varSigma ^{(q)}_{kk}\varSigma ^{(p)}_{jj} + 2\varSigma ^{(q)}_{kj}\varSigma ^{(p)}_{jk} + \varSigma ^{(q)}_{jj}\varSigma ^{(p)}_{kk}\right) . \end{aligned}$$

We first explain intuitively how the proof works. The proof of the support consistency can be thought of as controlling \(\Vert \hat{{\varvec{\varDelta }}} - {\varvec{\varDelta }}^*\Vert _\infty \). Clearly, for the population covariance matrices \({\varvec{\varSigma }}^{(p)}\) and \({\varvec{\varSigma }}^{(q)},\) \({\varvec{\varSigma }}^{(p)}{\varvec{\varDelta }}^*{\varvec{\varSigma }}^{(q)} + {\varvec{\varSigma }}^{(p)} - {\varvec{\varSigma }}^{(q)} = {\varvec{0}}\). If we replace the above population covariances with their sample versions, we can expect \(\Vert \hat{{\varvec{\varSigma }}}^{(p)}{\varvec{\varDelta }}^*\hat{{\varvec{\varSigma }}}^{(q)} + \hat{{\varvec{\varSigma }}}^{(p)} - \hat{{\varvec{\varSigma }}}^{(q)}\Vert _\infty \le \epsilon \) if the number of samples is large enough. Furthermore, \(\epsilon \) can be a function decreasing with \(\min (n_p, n_q)\), as the estimated covariances get closer and closer to the population ones.

Therefore, if we set the \(\epsilon \) to a decreasing function, we can still “contain” the optimal parameter \({\varvec{\varDelta }}^*\) in the feasible zone with high probability. By definition, the estimated difference \(\hat{{\varvec{\varDelta }}}\) should also be in the feasible zone; thus they should not be far off, if the zone is small enough. The rigorous proof of the above statements is given in the Appendix of Zhao et al. (2014).
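The population identity used above is easy to verify numerically; the check below (a sketch with random positive-definite precision matrices of our own choosing) uses the sign convention \({\varvec{\varDelta }}^* = {\varvec{\varTheta }}^{(p)} - {\varvec{\varTheta }}^{(q)}\) implied by the identity:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_precision(m):
    # Random symmetric positive-definite matrix to play the role of a precision matrix.
    A = rng.standard_normal((m, m))
    return A @ A.T + m * np.eye(m)

Theta_p, Theta_q = random_precision(5), random_precision(5)
Sigma_p, Sigma_q = np.linalg.inv(Theta_p), np.linalg.inv(Theta_q)

# Difference of precision matrices; then Sigma_p @ Delta @ Sigma_q = Sigma_q - Sigma_p,
# which cancels the remaining Sigma_p - Sigma_q term.
Delta = Theta_p - Theta_q
residual = Sigma_p @ Delta @ Sigma_q + Sigma_p - Sigma_q
print(np.max(np.abs(residual)))  # ~0 up to floating-point round-off
```

The cancellation follows from \({\varvec{\varSigma }}^{(p)}{\varvec{\varTheta }}^{(p)} = {\varvec{\varTheta }}^{(q)}{\varvec{\varSigma }}^{(q)} = {\varvec{I}}\), which is all the method needs from Gaussianity at the population level.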

Now, we give the support recovery theorem as follows (see Section 4 in Zhao et al. 2014 for details):

Theorem 3

(Support consistency of covariance-precision matching) Suppose P and Q are Gaussian, Assumptions 6 and 7 hold, \({\mathrm {min}}(n_p, n_q) \ge \log m\) and

$$\begin{aligned} \tau _{n_p, n_q} = \varOmega \left( \sqrt{\frac{\log m}{\min (n_p, n_q)}}\right) , \epsilon _{n_p, n_q} = M_1 \cdot \sqrt{\frac{\log m}{\min (n_p, n_q)}} \end{aligned}$$

and \(\min _{j,k: \varDelta ^*_{j,k} \ne 0} |\varDelta ^*_{j,k}| \ge 2\tau _{n_p, n_q}\), then with high probability, (9) can recover the correct support of \({\varvec{\varDelta }}^*\).

This support consistency theorem, although it only applies to Gaussian MNs, has a similar structure to the one derived for KLIEP (Sect. 4.3). First, both assume that the true non-zero parameters are large enough. Second, both require the sparsity-inducing factor (\(\lambda _{n_p, n_q}\) and \(\tau _{n_p, n_q}\)) to decay as the sample size \(\min (n_p, n_q)\) increases, while growing with the log-dimension \(\log m\).

4.6 Summary and discussion

Now, we summarize and compare these theoretical results. First we discuss the similarities of these theorems.

  • None of the above proofs requires the sparsity assumption on each individual MN. Thus, in theory, all methods should work well even when the individual MNs are dense.

  • The efficiency of all methods is affected by the sparsity of the changes (i.e., d). This makes sense, since the sparsity assumption is made on the changes between two MNs.

  • All theorems apply to the high-dimensional regime (\(m \gg \min (n_p, n_q)\)). None requires \(n_p\) or \(n_q\) to be comparable to the dimensionality m.

However, there is one important difference among these theorems. The sample complexities introduced in Sects. 4.3 and 4.4 are not symmetric; the sample complexity of \(n_q\) is more restrictive compared to that of \(n_p\). This is understandable, since KLIEP itself is an asymmetric method (the KL divergence is asymmetric). In comparison, the sample complexity of Covariance-Precision Matching is symmetric, i.e., the theorem does not show a “bias” toward either of the datasets. Thus, if one has perfectly balanced Gaussian datasets, it might be natural to use Covariance-Precision Matching to learn the differences.

5 Experiments

In this section, we compare the performance of two direct change detection methods: KLIEP and Covariance-Precision (CP) Matching using synthetic and real-world examples.

5.1 Implementations

Sparsity-inducing KLIEP can be implemented using a sub-gradient descent approach. The MATLAB® code can be found at http://www.ism.ac.jp/~liu/kliep_sparse/demo_sparse.html.

The R (R Core Team 2016) implementation of CP matching using ADMM can be obtained at https://github.com/sdzhao/dpm.

5.2 Synthetic examples

Illustrative example Now we illustrate the performance of both KLIEP and CP matching using two 50-dimensional multivariate zero-mean Gaussian distributions. First, we randomly generate a \(50 \times 50\) symmetric adjacency matrix \({\varvec{A}}^{(P)}\) with 10% connectivity and draw 500 samples from a Gaussian distribution with the following precision matrix:

$$\begin{aligned} \varTheta ^{(P)}_{i,j} = {\left\{ \begin{array}{ll} 2 &{} i = j\\ 0.4 &{} A^{(P)}_{i,j} \ne 0,\ i\ne j\\ 0 &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
(15)

Then we randomly remove six edges from it, resulting in the change pattern shown in Fig. 1a, and use it as \({\varvec{A}}^{(Q)}\). Following the same steps above, we construct \({\varvec{\varTheta }}^{(Q)}\) and generate 500 samples again. As suggested by Theorem 1, we set \(\lambda =\frac{\alpha \log 50}{500}\), and the learned \(\hat{{\varvec{\varDelta }}}\) are shown in Fig. 1c–e using different values of \(\alpha \).
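The data-generating step can be sketched as follows (a minimal NumPy version of our own; the positive-definiteness guard at the end is an extra safety step not mentioned in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 500

# Random symmetric adjacency matrix with roughly 10% connectivity.
upper = np.triu((rng.random((m, m)) < 0.1).astype(float), k=1)
A_p = upper + upper.T

Theta_p = 2.0 * np.eye(m) + 0.4 * A_p   # precision matrix as in (15)

# Guard: shift the diagonal if the random draw happens not to be positive definite.
lam_min = np.linalg.eigvalsh(Theta_p)[0]
if lam_min <= 0:
    Theta_p += (0.1 - lam_min) * np.eye(m)

# Draw n zero-mean Gaussian samples with covariance Theta_p^{-1}.
X_p = rng.multivariate_normal(np.zeros(m), np.linalg.inv(Theta_p), size=n)
print(X_p.shape)  # (500, 50)
```

Removing a few edges from `A_p` and repeating the construction yields the second dataset.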

Fig. 1 Illustrative experiments

The same experiments are repeated using the CP matching method. However, since the sparsity of CP matching is controlled via the selection of the threshold \(\tau \), we set \(\epsilon = 0.2\), which shows good performance empirically, and plot the learned \(\hat{{\varvec{\varDelta }}}\) using different thresholds. Results are shown in Fig. 1f–h.

As we can see, both approaches recover the change pattern well as we increase the sparsity control parameter.

ROC-curves In this experiment, we compare the two methods quantitatively using ROC curves. We adopt the True Positive Rate (TPR) and True Negative Rate (TNR) as described in Zhao et al. (2014):

$$\begin{aligned} {\mathrm{TPR}} = \frac{\sum _{t'\in S} \delta (\hat{{\varvec{\delta }}}_{ t'} \ne {\varvec{0}})}{\sum _{t'\in S} \delta ({\varvec{\delta }}^*_{t'} \ne {\varvec{0}})},\quad {\mathrm {TNR}} = \frac{\sum _{t'' \in S^c} \delta (\hat{{\varvec{\delta }}}_{t''} = {\varvec{0}})}{\sum _{t''\in S^c} \delta ({\varvec{\delta }}^*_{t''} = {\varvec{0}})}. \end{aligned}$$
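These rates can be computed from the estimated and true supports as follows (a sketch of our own; `tol` is a hypothetical numerical threshold for declaring a parameter group non-zero):

```python
import numpy as np

def tpr_tnr(delta_hat, delta_true, tol=1e-8):
    """True positive / true negative rates of support recovery.

    Arguments hold the per-group norms (or values) of the estimated
    and true difference parameters; `tol` decides what counts as zero.
    """
    hat_nz = np.abs(delta_hat) > tol
    true_nz = np.abs(delta_true) > tol
    tpr = (hat_nz & true_nz).sum() / true_nz.sum()
    tnr = (~hat_nz & ~true_nz).sum() / (~true_nz).sum()
    return float(tpr), float(tnr)

print(tpr_tnr(np.array([1.0, 0.0, 0.2, 0.0]),
              np.array([1.0, 0.0, 0.0, 0.3])))  # (0.5, 0.5)
```

Sweeping the sparsity parameter (\(\lambda \) or \(\tau \)) and recording (TPR, 1 − TNR) pairs traces out the ROC curves of Fig. 1b.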

We generate an adjacency matrix \(A^{(P)}\) with four-neighbour lattice structure and randomly remove \(d = \sqrt{m}\) edges producing \({\varvec{A}}^{(Q)}\). Two sets of \(n_p = n_q = 50\) samples are generated using the same criteria mentioned in (15). The ROC curves averaged over 50 trials with different dimensions are shown in Fig. 1b, and the AUCs are reported in Table 1.

It can be seen that as both the dimensionality and the number of changed edges increase, the KLIEP method retains stable performance, while the performance of the CP approach decays rapidly.

Table 1 The area under the curve (AUC) of ROC plot in Fig. 1b (“K” for KLIEP and “CP” for CP matching)

5.3 Running time

Although a rigorous timing comparison is difficult due to the different implementations of KLIEP and CP matching, in our experience KLIEP is faster but more memory-consuming, as our implementation stores the entire parameter vector in memory. On a server with 16 Xeon cores, it takes KLIEP about 15 min to run the experiments needed for plotting Fig. 1b, while it takes CP matching roughly 1 h.

As to KLIEP, we also observe that an “early stopping” heuristic (e.g., stopping at 100 iterations) can provide an accurate non-zero pattern within a short period of time.

5.4 Image difference detection

Fig. 2 Detecting changes of parking patterns from two photos

Two photos were taken on a rainy afternoon using a camera pointing at the parking lot of The Institute of Statistical Mathematics (ISM). In this task, we are interested in learning the changes of the parking patterns marked by green boxes in Fig. 2b. As we can see from Fig. 2a, b, the lighting conditions and the positions of raindrops differ between the two pictures.

To construct samples, we use windows of pixels (Fig. 2c). Each window is a dimension of a dataset, and the samples are the pixel RGB values within this window. By sliding the window across the entire picture, we may obtain samples of different dimensions. Two sets of data can be obtained by using this sample generating mechanism over two images.
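This sample-generating mechanism can be sketched as follows (a simplified version of our own: a single-channel image and a fixed stride, whereas the experiment below uses RGB values and a sliding window):

```python
import numpy as np

def window_samples(img, w=16, stride=16):
    """Turn a 2-D (grayscale) image into a dataset of windows.

    Each w x w window is one variable (dimension); the w*w pixel
    values inside it are that variable's samples.  Returns an array
    of shape (n_windows, w * w): row t holds the samples of window t.
    """
    H, W = img.shape
    rows = [img[i:i + w, j:j + w].reshape(-1)
            for i in range(0, H - w + 1, stride)
            for j in range(0, W - w + 1, stride)]
    return np.stack(rows)

img = np.arange(150 * 200, dtype=float).reshape(150, 200)  # placeholder for one photo
X = window_samples(img)
print(X.shape)  # (108, 256) with these window and stride sizes
```

Transposing `X` gives the usual samples-by-dimensions layout; applying the same function to both photos yields the two datasets.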

Assuming an image can be represented by an MN of windows, changes of pixels values within a window may cause changes of “interactions” between neighbouring windows. In other words, we can discover a change by looking at the change of the dependency of pixel values between a certain window and its neighbours. This is more advantageous than simply looking at the pixel values since changing the brightness of a picture may increase the pixel values in many windows simultaneously, even if the “contrast” between two windows does not change by much.

By applying KLIEP to such two sets of data and highlighting adjacent window pairs that are involved in the changes of pairwise interactions, we may spot changes between the two images. In our experiment, we use sliding windows of size \(16 \times 16\) on a \(200\times 150\) image, generating two sets of samples with \(m=999\) and \(n_p=n_q=256\). We reduce \(\lambda \) until \(|{\hat{S}}| > 40\). The spotted changes are plotted in Fig. 2d. It can be seen that KLIEP has correctly labelled almost all changed parking spots between the two images, except one missed on the left.

Note that here we set \(\psi ({\varvec{x}}_u, {\varvec{x}}_v) = \exp (-\frac{\Vert {\varvec{x}}_u - {\varvec{x}}_v\Vert ^2}{0.5})\), and the underlying MN is highly non-Gaussian so CP matching cannot be applied here.
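The pairwise feature above is simply a Gaussian kernel between the pixel-value vectors of two windows; a minimal sketch:

```python
import numpy as np

def psi(x_u, x_v):
    # Gaussian-kernel pairwise feature with the bandwidth parameter 0.5 from the text.
    return np.exp(-np.sum((x_u - x_v) ** 2) / 0.5)

print(psi(np.zeros(3), np.zeros(3)))  # 1.0
```

Since this feature is non-linear in the pixel values, the resulting pairwise MN is highly non-Gaussian, which is why CP matching is inapplicable here.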

6 Open problems

Although pioneering works have been conducted in this area, there are still important unsolved open problems. In this section, we list a few examples.

Generalized covariance-precision matching In Sect. 3.3, we introduced an equality between Gaussian covariance and precision matrix (4). This leads to a direct sparse change learning approach. However, it does not apply to more general pairwise MNs. A natural question is, can we extend this relationship between covariance and precision matrices to a more general principle? Particularly, in a recent work (Loh and Wainwright 2013), the generalized covariance matrix was used to learn a non-Gaussian graphical model structure. Would a generalized equality (4) provide us with a universal framework of learning changes between MNs?

Learning changes from multiple MNs In this paper, we have only reviewed the algorithms that learn changes between two MNs. In fact, in some applications, datasets may be obtained as multiple “snapshots”. For example, gene activities may be measured at a few different time points. Under the same assumption that changes between adjacent time points are “mild” and “sparse”, can we perform multiple change detections in one shot?

Asymmetry versus symmetry As we have pointed out in Sect. 4.6, there exists an asymmetry in KLIEP while Covariance-Precision matching has a symmetric formulation. An interesting future direction is to systematically investigate how such an asymmetry affects the change detection results, and more importantly, how can we automatically determine which density to be Q and which one to be P in the ratio formulation.

We believe thorough investigations in these three directions will significantly expand our knowledge over the domain of learning changes between MNs in the future.

7 Conclusion

In this paper, we have reviewed an MN change learning method based on density ratio estimation and other alternative approaches. Statistical guarantees regarding the support recovery and \(\ell _2\) consistency were also reported and compared. Through their direct modelling and theoretical results, we can see an interesting common pattern in all these methods: they work well regardless of the difficulty of learning individual MNs.

These results are inspiring, as they shed light on a new family of methods that only learn the incremental patterns. They show that if the change itself is simple enough, we can achieve good learning performance even with a limited amount of information. Compared to classic, static pattern recognition, such methods are well-suited for analysing dynamic datasets, where the “absolute” pattern is not the main interest, and learning the change itself is more valuable.

These works have offered a new vision of research on learning changes between patterns. We believe these methods and theorems may have many potential applications in the years to come.