Learning sparse structural changes in high-dimensional Markov networks

Recent years have seen an increasing interest in learning sparse changes in Markov networks. Changes in the structure of Markov networks reflect alterations of the interactions between random variables under different regimes and provide insights into the underlying system. While each individual network structure can be complicated and difficult to learn, the overall change from one network to another can be simple. This intuition gave birth to an approach that directly learns the sparse changes without modelling and learning the individual (possibly dense) networks. In this paper, we review such a direct learning method together with some of the latest developments along this line of research.


Introduction
The problem of learning changes of interactions between random variables can be useful in many applications. For example, genes may regulate each other in different ways when external conditions are changed; the number of daily flu-like symptom reports in nearby hospitals may become correlated when a major epidemic breaks out; EEG signals from different regions of the brain may be synchronized/desynchronized when the subject is performing different activities. Spotting such changes in interactions may provide key insights into the underlying system.
The interactions among random variables can be formulated as undirected probabilistic graphical models, or Markov networks (MNs) [Koller and Friedman, 2009], which express the interactions via conditional independence. We consider a simple model: pairwise MNs, where factors are encoded only for single random variables or pairs of random variables. Due to the Hammersley-Clifford theorem [Hammersley and Clifford, 1971], the underlying joint probability density function can be represented as the product of univariate and bivariate factors.
Structure learning of MNs is an important challenge that has attracted a significant amount of attention. Earlier methods [Spirtes et al., 2000] use hypothesis testing to learn the conditional independence among random variables, which reflects the absence of edges. It has been proved that such a problem is generally NP-hard [Chickering, 1996]. Methods restricted to a sub-class of graphical models (such as trees or forests) [Chow and Liu, 1968, Geman and Geman, 1984, Liu et al., 2011] also suffer from growing computational cost.
However, the Hammersley-Clifford theorem, together with recent breakthroughs on sparsity-inducing methods [Tibshirani, 1996, Zhao and Yu, 2006, Wainwright, 2009], gave birth to many sparse structure learning ideas in which a sparse factorization of the joint/conditional density function is estimated to infer the underlying structure of the MN [Friedman et al., 2008, Banerjee et al., 2008, Meinshausen and Bühlmann, 2006, Ravikumar et al., 2010]. Although most works have focused on parametric models, structure learning has also been conducted on semi-parametric models in recent years [Liu et al., 2009, 2012].
There is also a trend of learning the changes between MNs [Zhang and Wang, 2010, Liu et al., 2014, Zhao et al., 2014]. Compared to standard structure learning, learning changes views the problem in a more dynamic fashion: instead of estimating a static pattern, we hope to obtain a dynamic one, namely "the change", by comparing two sets of data. In some applications, the static pattern may not be computationally tractable to learn, or may simply be too hard to comprehend. However, the difference between two patterns may be represented by some simple incremental effects involving only a small number of nodes or bonds, and thus takes much less effort to learn and understand.
One of the main uses of structural change learning is to spot responding variables in "controlled experiments" [Zhang and Wang, 2010], where some key external factors of the experiments are altered and two sets of samples are obtained. By discovering the changes in the MNs, we can see how random variables have responded to the change of the external stimuli.
In this paper, we first review a recently proposed method for learning structural changes between MNs [Liu et al., 2014]. It follows a simple idea: if the MNs are products of pairwise factors, the ratio of two MNs must also be proportional to the ratio of those factors. Moreover, factors that do not change between two MNs make no contribution to the ratio. This naturally suggests modelling the changes between two MNs P and Q as the ratio between the two MN density functions p(x) and q(x). The ratio p(x)/q(x) is estimated directly in a one-shot estimation [Sugiyama et al., 2012]. This density-ratio approach can work well even when each MN is dense (as long as the change is sparse).
We also present some very recent theoretical results along this line of research. These works prove the consistency of the density ratio method in the high-dimensional setting. Support consistency means that the support of the estimated parameter converges to the support of the true parameter in probability, which is an important property for sparsity-inducing methods. It has been shown that under certain conditions the density ratio method recovers the correct parameter sparsity with high probability [Liu et al., 2017b]. Moreover, Fazayeli and Banerjee [2016] introduced a theorem for the regularized density ratio estimator showing that the estimation error, i.e., the ℓ2 distance between the estimated parameter and the true parameter, converges to zero under milder conditions.
For comparison, we also review a few alternative approaches to the change detection problem between MNs. The differential graphical model learning approach [Zhao et al., 2014] uses a covariance-precision matrix equality to learn changes without going through the learning of the individual MNs. The "jumping" MNs setting [Kolar and Xing, 2012] considers a scenario where the observations are received as a sequence and multiple sub-sequences are generated via different parametrizations of the MN.
We organize this paper as follows: Section 2 introduces the problem formulation of learning changes between MNs. Section 3 explains the density ratio approach and two other alternatives. Section 4 reviews the theoretical results of these approaches. Synthetic and real-world experiments are conducted in Section 5 to compare the performance of the methods. Finally, Sections 6 and 7 give a few possible future directions and conclude the current developments along this line of research.

Formulating Changes
In this section, we focus on formulating the change of MNs using the density ratio. At the end of this section, a few alternatives are also introduced.

Structural Changes by Parametric Differences
Detecting changes naturally involves two sets of data. Consider independent samples drawn separately from two probability distributions P and Q on R^m:

$$\{x^{(p)}_i\}_{i=1}^{n_p} \overset{\mathrm{i.i.d.}}{\sim} P, \qquad \{x^{(q)}_i\}_{i=1}^{n_q} \overset{\mathrm{i.i.d.}}{\sim} Q.$$

We assume that P and Q belong to the family of Markov networks (MNs) consisting of univariate and bivariate factors, i.e., their respective probability densities p and q are expressed as

$$p(x; \theta^{(p)}) = \frac{1}{Z(\theta^{(p)})} \exp\Bigg(\sum_{u \ge v} {\theta^{(p)}_{u,v}}^{\!\top} \psi_{u,v}(x_u, x_v)\Bigg), \tag{1}$$

where x = (x_1, ..., x_m)^⊤ is the m-dimensional random variable, ⊤ denotes the transpose, θ^(p)_{u,v} is the b-dimensional parameter vector for the pair (x_u, x_v), and θ^(p) is the entire parameter vector obtained by stacking all θ^(p)_{u,v}. The feature function ψ_{u,v}(x_u, x_v) is a bivariate vector-valued basis function, and Z(θ^(p)) is the normalization factor defined as

$$Z(\theta^{(p)}) = \int \exp\Bigg(\sum_{u \ge v} {\theta^{(p)}_{u,v}}^{\!\top} \psi_{u,v}(x_u, x_v)\Bigg) \mathrm{d}x.$$

q(x; θ^(q)) is defined in the same way. Such a parametrization is generic for representing pairwise graphical models.
Directly estimating an MN in this generic form is challenging, since Z(θ^(p)) usually does not have a closed form except for a few special cases (e.g., the Gaussian distribution). Markov chain Monte Carlo [Robert and Casella, 2005] can be used to approximate such an integral; however, this brings extra approximation errors.
Nonetheless, we can define changes between two MNs as the difference between their parameters. Therefore, given two parametric models p(x; θ^(p)) and q(x; θ^(q)), we hope to discover the changes in parameters from P to Q, i.e., δ = θ^(p) − θ^(q). Note that by this definition the changes are continuous. This is more advantageous than only considering discrete changes of the MN structure, since a weak change of interaction does not necessarily break or flip the bond between two random variables.

Density Ratio Modelling
An important observation is that although two MNs may be complex individually, their changes might be "simple", since many terms may cancel when taking the difference, i.e., θ^(p)_{u,v} − θ^(q)_{u,v} might be zero. The key idea in [Liu et al., 2014] is to consider the ratio of p and q:

$$\frac{p(x; \theta^{(p)})}{q(x; \theta^{(q)})} \propto \exp\Bigg(\sum_{u \ge v} \big(\theta^{(p)}_{u,v} - \theta^{(q)}_{u,v}\big)^{\top} \psi_{u,v}(x_u, x_v)\Bigg), \tag{2}$$

where θ^(p)_{u,v} − θ^(q)_{u,v} encodes the difference between P and Q for the factor ψ_{u,v}(x_u, x_v), i.e., θ^(p)_{u,v} − θ^(q)_{u,v} is zero if there is no change in the factor ψ_{u,v}(x_u, x_v).
Once the ratio of p and q is considered, the parameters θ^(p)_{u,v} and θ^(q)_{u,v} do not have to be estimated individually. Their difference δ_{u,v} = θ^(p)_{u,v} − θ^(q)_{u,v} is sufficient for change detection, as x only interacts with this parametric difference in the ratio model. Thus, in the density-ratio formulation, p and q are no longer modelled separately, but directly as

$$r(x; \delta) = \frac{1}{N(\delta)} \exp\Bigg(\sum_{u \ge v} \delta_{u,v}^{\top} \psi_{u,v}(x_u, x_v)\Bigg), \tag{3}$$

where N(δ) is the normalization term. This direct formulation also halves the number of parameters, from both θ^(p) and θ^(q) to only δ.
The normalization term N(δ) is chosen to fulfill ∫ q(x) r(x; δ) dx = 1:

$$N(\delta) = \int q(x) \exp\Bigg(\sum_{u \ge v} \delta_{u,v}^{\top} \psi_{u,v}(x_u, x_v)\Bigg) \mathrm{d}x,$$

which is the expectation over q(x) (if one models the ratio q(x)/p(x) instead, the normalization with respect to p(x) should be used). Note that this integral is with respect to the true distribution q(x) from which our samples are generated, not the model q(x; θ^(q)). This expectation form of the normalization term is another notable advantage of the density-ratio formulation, because it can be easily approximated by the sample average over {x^(q)_i}_{i=1}^{n_q}. Thus, one can always use this empirical normalization term for any (non-Gaussian) models p(x; θ^(p)) and q(x; θ^(q)). Interestingly, if one uses ψ_{u,v}(x_u, x_v) = x_u x_v in the ratio model, it does not mean one assumes the two individual MNs are Gaussian or Ising; it simply means we assume the changes of interactions are linear while other non-linear interactions remain unchanged. This formulation allows us to consider highly complicated MNs as long as their changes are "simple".
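To make this concrete, below is a minimal sketch (not the implementation of [Liu et al., 2014]) of evaluating the ratio model with the empirical normalization term, assuming the linear feature ψ_{u,v}(x_u, x_v) = x_u x_v; all function and variable names are hypothetical.

```python
import numpy as np

def pairwise_features(x):
    """Stack x_u * x_v for all pairs u >= v into one feature vector."""
    iu, iv = np.tril_indices(x.shape[0])      # pairs (u, v) with u >= v
    return x[iu] * x[iv]

def ratio_model(x, delta, X_q):
    """Density ratio model r(x; delta) with the empirical normalization
    term computed from samples X_q drawn from Q (shape: n_q x m)."""
    unnorm = np.exp(delta @ pairwise_features(x))
    n_hat = np.mean([np.exp(delta @ pairwise_features(xq)) for xq in X_q])
    return unnorm / n_hat

# toy usage: with delta = 0 (no change), the ratio is identically one
rng = np.random.default_rng(0)
m, n_q = 5, 100
X_q = rng.normal(size=(n_q, m))
delta = np.zeros(m * (m + 1) // 2)
print(ratio_model(rng.normal(size=m), delta, X_q))   # -> 1.0
```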
Throughout the rest of the paper, we simplify the notation from ψ u,v to ψ by assuming the feature functions are the same for all pairs of random variables.

Quasi Log-likelihood Equality
Density ratio is not the only direct modelling approach. Particularly for Gaussian MNs, where the two distributions are parametrized as p(x; Θ^(p)) and q(x; Θ^(q)) with precision matrices Θ^(p) and Θ^(q), one alternative was proposed using the following equality [Zhao et al., 2014]:

$$\Sigma^{(p)} \big(\Theta^{(p)} - \Theta^{(q)}\big) \Sigma^{(q)} + \Sigma^{(p)} - \Sigma^{(q)} = 0, \tag{4}$$

where Σ^(p) is the covariance matrix of the Gaussian distribution p. If we replace the covariance matrices Σ^(p) and Σ^(q) with their sample versions Σ̂^(p) and Σ̂^(q), it can be seen that Θ^(p) − Θ^(q) is the only variable interacting with the data. Therefore, one may replace it with a single parameter Δ and later minimize the sample version of (4) (see Section 3.3 for details).
This direct formulation specifically uses a property of the Gaussian MN: the covariance matrix computed from the data and the precision matrix that encodes the MN structure should approximately cancel each other when multiplied. However, such a relationship does not hold for other distributions in general. The generality of this equality is an interesting open question (see Section 6).
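As a quick sanity check of the equality (4), the following sketch verifies it numerically for two randomly generated Gaussian precision matrices; the diagonally-dominant construction below is only an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6

def random_precision(m, rng):
    """A simple diagonally-dominant construction guaranteeing positive definiteness."""
    A = rng.normal(size=(m, m)) * (rng.random((m, m)) < 0.2)
    A = (A + A.T) / 2
    np.fill_diagonal(A, np.abs(A).sum(axis=1) + 1.0)
    return A

theta_p, theta_q = random_precision(m, rng), random_precision(m, rng)
sigma_p, sigma_q = np.linalg.inv(theta_p), np.linalg.inv(theta_q)

# Sigma_p (Theta_p - Theta_q) Sigma_q + Sigma_p - Sigma_q should vanish
residual = sigma_p @ (theta_p - theta_q) @ sigma_q + sigma_p - sigma_q
print(np.abs(residual).max())   # close to zero up to numerical error
```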
Remark In fact, it is not necessary to combine θ^(p) − θ^(q) in (2) (or Θ^(p) − Θ^(q) in (4)) into one parameter. However, such a model would be unidentifiable, since many combinations of Θ^(p) and Θ^(q) can produce the same difference. Nonetheless, such indirect modelling may still be useful when the individual structures of the MNs are also of interest. We review an example of such indirect modelling in Section 3.4.

Density Ratio Estimation
Density ratio estimation has recently been introduced to the machine learning community and has proven useful in a wide range of applications [Sugiyama et al., 2012]. In [Liu et al., 2014], a density ratio estimator called the Kullback-Leibler importance estimation procedure (KLIEP) for log-linear models [Sugiyama et al., 2008, Tsuboi et al., 2009] was employed to learn structural changes.
Here we define

$$\hat r(x; \delta) = \frac{1}{\hat N(\delta)} \exp\Bigg(\sum_{u \ge v} \delta_{u,v}^{\top} \psi_{u,v}(x_u, x_v)\Bigg), \qquad \hat N(\delta) = \frac{1}{n_q} \sum_{i=1}^{n_q} \exp\Bigg(\sum_{u \ge v} \delta_{u,v}^{\top} \psi_{u,v}\big(x^{(q)}_{i,u}, x^{(q)}_{i,v}\big)\Bigg)$$

as the empirical density ratio model. KLIEP minimizes the Kullback-Leibler divergence from p(x) to \hat r(x; δ) q(x):

$$\mathrm{KL}\big[p \,\|\, \hat r_{\delta}\, q\big] = \int p(x) \log \frac{p(x)}{\hat r(x; \delta)\, q(x)} \mathrm{d}x = \mathrm{Const.} - \int p(x) \log \hat r(x; \delta)\, \mathrm{d}x. \tag{5}$$

In practice, one minimizes the negative empirical approximation of the rightmost term in Eq. (5):

$$\hat\ell_{\mathrm{KLIEP}}(\delta) = -\frac{1}{n_p} \sum_{i=1}^{n_p} \log \hat r\big(x^{(p)}_i; \delta\big).$$

Optimization Since the KLIEP objective consists of a linear part and a log-sum-exp function [Boyd and Vandenberghe, 2004], it is convex with respect to δ, and its global minimizer can be numerically found by standard optimization techniques such as gradient descent. The gradient of the objective with respect to δ_{u,v} is given by

$$\nabla_{\delta_{u,v}} \hat\ell_{\mathrm{KLIEP}}(\delta) = -\frac{1}{n_p} \sum_{i=1}^{n_p} \psi_{u,v}\big(x^{(p)}_{i,u}, x^{(p)}_{i,v}\big) + \frac{1}{n_q} \sum_{i=1}^{n_q} \hat r\big(x^{(q)}_i; \delta\big)\, \psi_{u,v}\big(x^{(q)}_{i,u}, x^{(q)}_{i,v}\big),$$

which can be computed in a straightforward manner for any feature vector ψ(x_u, x_v).
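Under the same linear-feature assumption as the earlier sketch, the objective and gradient above can be written in a few lines of NumPy; this is an illustration of the formulas, not the reference implementation.

```python
import numpy as np

def features(X):
    """Row-wise pairwise features x_u * x_v (u >= v) for a sample matrix X (n x m)."""
    iu, iv = np.tril_indices(X.shape[1])
    return X[:, iu] * X[:, iv]               # shape: n x m(m+1)/2

def kliep_loss_grad(delta, Xp, Xq):
    """Unregularized KLIEP objective and its gradient."""
    Fp, Fq = features(Xp), features(Xq)
    log_w = Fq @ delta                        # log of the unnormalized ratio on Q-samples
    log_nhat = np.log(np.mean(np.exp(log_w)))
    loss = -np.mean(Fp @ delta) + log_nhat
    r_hat = np.exp(log_w - log_nhat)          # normalized ratio values on Q-samples
    grad = -Fp.mean(axis=0) + (r_hat[:, None] * Fq).mean(axis=0)
    return loss, grad

rng = np.random.default_rng(0)
Xp, Xq = rng.normal(size=(200, 5)), rng.normal(size=(300, 5))
loss, grad = kliep_loss_grad(np.zeros(15), Xp, Xq)
print(loss, np.linalg.norm(grad))
```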

Sparsity-Inducing Regularization
In the search for sparse changes, one may regularize the KLIEP solution with a sparsity-inducing norm Σ_{u≥v} ‖δ_{u,v}‖, i.e., the group-lasso penalty [Yuan and Lin, 2006], where we use ‖·‖ to denote the ℓ2 norm. Note that the density-ratio approach [Liu et al., 2014] directly sparsifies the difference θ^(p) − θ^(q), and thus intuitively this method can still work well even if θ^(p) and θ^(q) are dense, as long as θ^(p) − θ^(q) is sparse. The following is the objective function used in [Liu et al., 2014]:

$$\hat\delta = \operatorname*{argmin}_{\delta}\ \hat\ell_{\mathrm{KLIEP}}(\delta) + \lambda_{n_p, n_q} \sum_{u \ge v} \|\delta_{u,v}\|.$$

In a recent work [Fazayeli and Banerjee, 2016], the authors considered structured changes, such as sparse, block-sparse, node-perturbed sparse, and so on. These structured changes can be represented via suitable atomic norms [Chandrasekaran et al., 2012, Mohan et al., 2014]. For example, a KLIEP objective with a node-perturbation regularizer groups all parameters attached to each node, so that entire rows/columns of the change matrix can be selected or zeroed out simultaneously. Such a regularization can be used to discover perturbed nodes, i.e., nodes that have a completely different connectivity pattern to other nodes between the two networks.
Optimization Although the original KLIEP objective is smooth and convex, the sparsity-inducing norms are in general non-smooth. Proximal gradient methods, such as the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) [Beck and Teboulle, 2009], can be utilized to solve the regularized KLIEP objectives. A FISTA-like algorithm with a faster rate of convergence was proposed in [Fazayeli and Banerjee, 2016].
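To illustrate the proximal-gradient idea, here is a sketch that runs plain ISTA (rather than FISTA) on the group-lasso regularized KLIEP objective, reusing `kliep_loss_grad`, `Xp`, and `Xq` from the previous sketch; the step size and the group structure are assumptions made for illustration.

```python
import numpy as np

def group_soft_threshold(delta, groups, thresh):
    """Proximal operator of thresh * sum_g ||delta_g||_2."""
    out = delta.copy()
    for g in groups:                          # each g indexes one (u, v) group
        norm = np.linalg.norm(delta[g])
        out[g] = 0.0 if norm <= thresh else (1 - thresh / norm) * delta[g]
    return out

def regularized_kliep(Xp, Xq, groups, lam, step=0.1, n_iter=500):
    """ISTA on the group-lasso regularized KLIEP objective (illustrative sketch)."""
    delta = np.zeros(sum(len(g) for g in groups))
    for _ in range(n_iter):
        _, grad = kliep_loss_grad(delta, Xp, Xq)      # gradient step on the smooth part
        delta = group_soft_threshold(delta - step * grad, groups, step * lam)
    return delta

# with scalar features, each pair (u, v) forms a singleton group
m = 5
groups = [np.array([k]) for k in range(m * (m + 1) // 2)]
delta_hat = regularized_kliep(Xp, Xq, groups, lam=0.1)
print(np.nonzero(delta_hat)[0])               # indices of the estimated changed pairs
```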

Covariance-Precision Matching
As mentioned above, the density ratio formulation is not the only way to motivate direct modelling. For the formulation using the equality (4), we can solve the following sparsity-inducing objective, introduced in [Zhao et al., 2014]:

$$\hat\Delta = \operatorname*{argmin}_{\Delta}\ |\Delta|_1 \quad \text{subject to} \quad \Big|\hat\Sigma^{(p)} \Delta \hat\Sigma^{(q)} + \hat\Sigma^{(p)} - \hat\Sigma^{(q)}\Big|_{\infty} \le \epsilon, \tag{9}$$

where ε is a hyper-parameter, |·|_1 is the element-wise ℓ1 norm, and |·|_∞ is the element-wise maximum absolute value. To obtain a sparse solution, we threshold the solution at a certain level τ, i.e., entries with |Δ̂_{u,v}| < τ are rounded to 0. The constraint enforces the sample version of the equality (4), and we use a single parameter Δ to replace Θ^(p) − Θ^(q).

Optimization This method is computationally demanding as the dimension m grows. An Alternating Direction Method of Multipliers (ADMM) procedure [Boyd et al., 2011] was implemented based on an augmented version of (9) (see Section 3.3 of [Zhao et al., 2014] for details).
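A minimal sketch of (9) using a generic convex solver follows; this is not the ADMM implementation of [Zhao et al., 2014], and the use of cvxpy, the symmetry constraint, and the chosen ε and τ are assumptions made for illustration only.

```python
import numpy as np
import cvxpy as cp

def cp_matching(S_p, S_q, eps, tau):
    """Constrained L1 minimization of (9), followed by hard thresholding at tau."""
    m = S_p.shape[0]
    Delta = cp.Variable((m, m), symmetric=True)
    residual = S_p @ Delta @ S_q + S_p - S_q
    problem = cp.Problem(cp.Minimize(cp.sum(cp.abs(Delta))),
                         [cp.max(cp.abs(residual)) <= eps])
    problem.solve()
    D = Delta.value
    D[np.abs(D) < tau] = 0.0                 # round small entries to zero
    return D

# toy usage: two Gaussian datasets from the same distribution -> Delta should be ~0
rng = np.random.default_rng(0)
Xp, Xq = rng.normal(size=(500, 4)), rng.normal(size=(500, 4))
S_p, S_q = np.cov(Xp.T), np.cov(Xq.T)
eps = np.abs(S_p - S_q).max()      # makes Delta = 0 feasible, so a solution exists
print(cp_matching(S_p, S_q, eps=eps, tau=0.05))
```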

Maximizing Joint Likelihood
As mentioned in Section 2.3, one does not have to use direct modelling to learn sparse changes between MNs. In fact, separate modelling may not only discover changes, but can also recover the individual MNs themselves. A method based on the fused lasso [Tibshirani et al., 2005] has been developed for this purpose [Zhang and Wang, 2010]. This method also sparsifies θ^(p) − θ^(q) directly.
The original method conducts feature-wise neighbourhood regression [Meinshausen and Bühlmann, 2006] jointly for P and Q, which can be conceptually understood as maximizing the local conditional Gaussian likelihood jointly on each random variable t. A slightly more general form of the learning criterion may be summarized as

$$\min_{\theta^{(p)}_t,\, \theta^{(q)}_t}\ \ell_t\big(\theta^{(p)}_t; X_p\big) + \ell_t\big(\theta^{(q)}_t; X_q\big) + \lambda_1\big(\|\theta^{(p)}_t\|_1 + \|\theta^{(q)}_t\|_1\big) + \lambda_2 \big\|\theta^{(p)}_t - \theta^{(q)}_t\big\|_1, \tag{10}$$

where ℓ_t(θ; X_p) is the negative log conditional likelihood for the t-th random variable and each dimension of θ corresponds to one of its potential neighbours; ℓ_t(θ; X_q) is defined in the same way as ℓ_t(θ; X_p).
Since the Fused-lasso-based method directly sparsifies the changes in MN structure, it can work well even when each MN is not sparse (when λ 1 is set to 0).
Learning Changes in Sequence Another recent development [Kolar and Xing, 2012] along this line of research assumes the data points are received sequentially, i.e., we observe x^(1), x^(2), ..., x^(T) over time points T = {1, 2, ..., T}. Suppose T can be segmented into K disjoint unknown subsets T = ∪_{k∈{1,...,K}} T_k, and x^(t) ∼ p(x; θ^(T_k)) for t ∈ T_k. The task is to segment such a sequence and learn an estimate θ̂ for each segment. We can extend the idea of the fused lasso in (10) and maximize the joint likelihood over each single observation:

$$\operatorname*{argmin}_{\theta^{(1)}, \dots, \theta^{(T)}}\ \sum_{t=1}^{T} \ell\big(\theta^{(t)}; x^{(t)}\big) + \lambda_1 \sum_{t=1}^{T} \big\|\theta^{(t)}\big\|_1 + \lambda_2 \sum_{t=2}^{T} \big\|\theta^{(t)} - \theta^{(t-1)}\big\|_1,$$

where the fused-lasso term sparsifies the changes between MNs at adjacent time points; thus the learned θ̂^(1), θ̂^(2), ..., θ̂^(T) is "piecewise-constant" and the segments are automatically determined from it. A block-coordinate descent procedure was proposed to solve this problem efficiently [Kolar et al., 2010].

Theoretical Analysis
The KLIEP algorithm not only performs well in practice; it is also justified theoretically.
In this section, we first introduce the support recovery theorem of KLIEP and then review some recent theoretical developments of direct change learning.

Preliminaries
In the previous section, a sub-vector of δ indexed by a pair (u, v) corresponds to a specific edge of an MN. From now on, we switch to a "unitary" index system, as our analysis does not depend on the edge or structure setting of the graph.
We introduce the "true parameter" notation δ*, defined by p(x) = q(x) r(x; δ*), and the pairwise index set E = {(u, v) | u ≥ v}. Two sets of sub-vector indices with respect to δ* and E are defined as

$$S = \{t \in E \mid \delta^*_t \neq 0\}, \qquad S^c = \{t \in E \mid \delta^*_t = 0\},$$

and we define Ŝ = {t ∈ E | δ̂_t ≠ 0} and Ŝ^c accordingly. The sample Fisher information matrix I ∈ R^{b(m²+m)/2 × b(m²+m)/2} denotes the Hessian of the log-likelihood, I = ∇² ℓ̂_KLIEP(δ*), and I_{AB} is the sub-matrix of I whose rows and columns are indexed by the sets A, B ⊆ E, respectively. In this section, we prove the support consistency, i.e., with high probability S = Ŝ and S^c = Ŝ^c (see, e.g., Chapter 11 in [Hastie et al., 2015] for an introduction to support consistency).

Assumptions
We try not to impose assumptions directly on each individual MN, as the essence of the KLIEP method is that it can handle various changes regardless of the types of the individual MNs.
The first two assumptions are essential to many support consistency theorems (see, e.g., Eqs. (15) and (16) in [Wainwright, 2009], or Assumptions A1 and A2 in [Ravikumar et al., 2010]). These assumptions are made on the Fisher information matrix.
Assumption 1 (Dependency Assumption) The sample Fisher information submatrix I_{SS} has bounded eigenvalues: Λ_min(I_{SS}) ≥ λ_min > 0 with probability 1 − ξ_q, where Λ_min denotes the minimum-eigenvalue operator of a symmetric matrix.
This assumption on the submatrix of I is to ensure that the density ratio model is identifiable and the objective function is "reasonably convex".
Assumption 2 (Incoherence Assumption) The unchanged edges cannot exert overly strong effects on the changed edges, in the sense that the sample Fisher information sub-matrices I_{S^c S} and I_{SS} satisfy an incoherence condition (see [Liu et al., 2017b] for the precise statement). Note that this type of assumption is sometimes called the "irrepresentability" condition.
Assumption 3 (Smoothness Assumption on the Likelihood Ratio) The log-likelihood ratio ℓ̂_KLIEP(δ) is smooth around its optimal value, i.e., it has bounded derivatives with probability 1, where ‖·‖ and |||·||| denote the spectral norms of a matrix and a tensor, respectively (see, e.g., [Tomioka and Suzuki, 2014] for the definition of the spectral norm of a tensor). This assumption guarantees that the log-likelihood function is well-behaved.

Now, we state the following assumptions on the density ratio:

Assumption 4 (Correct Model Assumption) The density ratio model is correct, i.e., there exists δ* such that p(x) = r(x; δ*) q(x).
Although analyzing the mis-specified ratio model [Kanamori et al., 2010] is certainly an interesting open question, we focus on correctly specified models in this section.
Assumption 5 (Smooth Density Ratio Assumption) For any vector u ∈ R^{dim(δ*)} such that ‖u‖ ≤ ‖δ*‖ and every a ∈ R, the ratio model satisfies a tail-probability inequality with a constant M > 0 that is independent of m (see [Liu et al., 2017b] for the precise statement). This assumption states that the density ratio model, around its optimal parameter, should not often attain large values over samples from Q.

Successful Support Recovery of KLIEP [Liu et al., 2017b,a]
Theorem 1 Suppose that Assumptions 1, 2, 3, 4, and 5, as well as a lower bound (12) on the magnitude of the smallest non-zero group min_{t∈S} ‖δ*_t‖, are satisfied, where d is the number of changed edges, defined as d = |S|, i.e., the cardinality of the set of non-zero parameter groups. Suppose also that the regularization parameter λ_{n_p,n_q} is chosen according to (13) and that n_q ≥ M_3 n_p², where M_1, M_2, and M_3 are constants. Then there exist constants L_1, K_1, and K_2 such that if n_p ≥ L_1 d² log((m² + m)/2), then with high probability the following properties hold:
• Unique Solution: The solution of (11) is unique.
• Successful Change Detection: Ŝ = S and Ŝ^c = S^c.
The proof of this theorem follows the primal-dual witness construction (see, e.g., Section 11.4.2 in Hastie et al. [2015]).
Remark The main conclusion of this theorem is that if the regularization parameter is reasonably chosen (13) and the true non-zero groups δ*_t, t ∈ S are large enough (12), then with high probability we are guaranteed to recover the correct support of the parameters. The number of samples n_p needed only grows linearly with log m, while n_q grows quadratically with n_p.

4.4 ℓ2 Consistency of KLIEP with Atomic Norms [Fazayeli and Banerjee, 2016]

As introduced in Section 3.2, atomic norms can be used to learn changes with special topological structures. Instead of support recovery, we focus here on the ℓ2 loss between the estimated parameter δ̂ and the true parameter δ*, i.e., ‖δ* − δ̂‖. First, we generalize our objective function as

$$\hat\delta = \operatorname*{argmin}_{\delta}\ \hat\ell_{\mathrm{KLIEP}}(\delta) + \lambda_{n_p, n_q} R(\delta), \tag{14}$$

where R is an atomic norm. The analysis relies on the Restricted Strong Convexity (RSC) property of the objective function over an error set. Intuitively, if the KLIEP objective is "highly curved", a small difference between the objective values at δ̂ and δ* ensures a small ‖δ̂ − δ*‖. Thus we only need to figure out how this difference in objective values approaches zero as the number of samples goes to infinity, and this is a more accessible target.
To make sure our objective has such a "strongly convex" curvature, one would need to impose a uniform lower bound on the eigenvalues of the objective Hessian (i.e., the sample Fisher information matrix I). However, this is not realistic in the high-dimensional setting, as I is certainly rank-deficient. As an alternative, we impose an assumption on the convexity of the KLIEP objective over a constrained set (a cone C). If RSC holds on such a cone, a deterministic bound on the ℓ2 estimation error can be obtained (see Banerjee et al. [2014]), where Ψ(C) is the norm compatibility constant [Negahban et al., 2009] and can be easily bounded. Note that although this bound itself is not probabilistic, the parameter λ_{n_p,n_q} is random and the RSC may only hold with a certain probability; one can infer the sample complexity from these bounds. Two things remain to be shown. First, we need to find a cone which contains δ̂ − δ*. Second, we need to prove that the KLIEP objective is RSC on this cone. We start with the first problem.
Error Set (Lemma 1 in [Banerjee et al., 2014]) For any convex loss ℓ(δ), if λ_{n_p,n_q} is large enough, i.e., λ_{n_p,n_q} ≥ β R*(∇ℓ(δ*)) with β > 1, where R* is the dual norm of R, it can be proven that the estimation error u = δ* − δ̂ lies in an error set

$$E_r = \Big\{ u \in \mathrm{dom}(\ell) \ \Big|\ R(\delta^* + u) \le R(\delta^*) + \tfrac{1}{\beta} R(u) \Big\},$$

where dom(ℓ) is the domain of ℓ. Let us define C_r = cone(E_r).
As to the second problem, it can be proven that the KLIEP objective is RSC on C_r with high probability once n_q ≥ n_0 with n_0 = w²(C_r ∩ S), where w(·) denotes the Gaussian width of a set and S is the unit hypersphere (Theorem 2 in [Fazayeli and Banerjee, 2016]). Thus n_0 is the minimum number of samples required from Q in order to apply this theorem.
Putting everything together, we have the main theorem proved in [Fazayeli and Banerjee, 2016]:

Theorem 2 (ℓ2 Consistency with Atomic Norms) Suppose Assumption 5 holds and δ̂ is the minimizer of (14). Then, with high probability, the choice of λ_{n_p,n_q} dominates the dual norm R*(∇ℓ̂_KLIEP(δ*)); and for n_q ≥ c_1 w²(C_r ∩ S), again with high probability, the estimate δ̂ satisfies an ℓ2 error bound proportional to λ_{n_p,n_q} Ψ(C_r). The constants M_1 and M_2 appearing in the precise probability statements are not the same as the ones in Theorem 1. To apply this theorem, we need to know bounds on w(Ω_R) and Ψ(C_r) for specific norms R. These bounds have been proven in the previous literature (see, e.g., [Banerjee et al., 2014]). For example, if R is the ℓ1 norm, then Ψ(C_r) ≤ 4√d and w(Ω_R) ≤ c√(log m), so applying the above theorem yields an explicit ℓ2 error bound.

Remark Although this bound does not directly prove support consistency, we learn that a sample complexity of min(n_p, n_q) = Ω(d log m) guarantees the convergence of the estimation error in the ℓ2 norm. As to n_q, it should also satisfy n_q ≥ c_1 w²(C_r ∩ S), which is again n_q = Ω(d log m) in the case of the ℓ1 norm. This sample complexity is milder than the one obtained by Liu et al. in the previous section, n_p = Ω(d² log((m² + m)/2)) and n_q = Ω(n_p²). Nonetheless, both theories apply to the high-dimensional regime m ≫ min(n_p, n_q).
4.5 Support Consistency of Covariance-Precision Matching [Zhao et al., 2014]

In this section, we introduce the support recovery theorem of the Covariance-Precision matching method (9) in terms of support consistency on Gaussian MNs. Specifically for Gaussian MNs, we need a slightly different set of notations, as the models are parametrized in matrix form, with precision matrices Θ^(p), Θ^(q) and sample covariance matrices Σ̂^(p), Σ̂^(q). The first assumption ensures that the "amount of change" is fixed, that the change is always sparse, and that it does not grow with the dimension m.
Assumption 6 The difference matrix Δ* has d ≤ m non-zero elements in its upper triangular sub-matrix, |Δ*|_1 ≤ M_0, and neither d nor M_0 depends on the dimension m.
The second assumption (Assumption 7; see [Zhao et al., 2014] for the precise statement) ensures that the covariates are not strongly dependent when there are many changes in the precision matrix. This is similar to the incoherence condition used in Assumption 2.

We first intuitively explain how the proof works. The proof of support consistency can be thought of as controlling |Δ̂ − Δ*|_∞. Clearly, for the population covariance matrices Σ^(p) and Σ^(q), we have Σ^(p) Δ* Σ^(q) + Σ^(p) − Σ^(q) = 0. If we replace the population covariances with their sample versions, we can expect |Σ̂^(p) Δ* Σ̂^(q) + Σ̂^(p) − Σ̂^(q)|_∞ ≤ ε if the number of samples is large enough. Furthermore, ε can be chosen as a function decreasing with min(n_p, n_q), as the estimated covariances get closer and closer to the population ones.
Therefore, if we set ε to such a decreasing function, we can still "contain" the optimal parameter Δ* in the feasible zone with high probability. By definition, the estimated difference Δ̂ should also be in the feasible zone; thus the two cannot be far apart if the zone is small enough. The rigorous proof of the above statements is given in the Appendix of Zhao et al. [2014]. Now, we give the support recovery theorem as follows (see Section 4 in [Zhao et al., 2014] for details):

Theorem 3 (Support Consistency of Covariance-Precision Matching) Suppose P and Q are Gaussian, Assumptions 6 and 7 hold, min(n_p, n_q) ≳ log m, the non-zero entries of Δ* are sufficiently large, and the tuning parameters ε and τ are chosen appropriately; then with high probability, (9) can recover the correct support of Δ*.
This support consistency theorem, although it only applies to Gaussian MNs, has a similar structure to the one derived for KLIEP (Section 4.3). First, both assume that the true non-zero parameters are large enough. Second, both assume that the sparsity-inducing factor (λ_{n_p,n_q} and τ_{n_p,n_q}, respectively) should decay as the sample size min(n_p, n_q) increases, while increasing as the log-dimension log m increases.

Summary and Discussion
Now, we summarize and compare these theoretical results. First, we discuss the similarities of these theorems.
• None of the above proofs requires a sparsity assumption on each individual MN. Thus, in theory, all methods should work well even when the individual MNs are dense.
• The efficiency of all methods is affected by the sparsity of the changes (i.e., d). This makes sense, since the sparsity assumption is made on the changes between the two MNs.
• All theorems apply to the high-dimensional regime (m ≫ min(n_p, n_q)). None requires n_p or n_q to be comparable to the dimensionality m.
However, there is one important difference among these theorems. The sample complexities introduced in Sections 4.3 and 4.4 are not symmetric: the sample complexity of n_q is more restrictive compared to that of n_p. This is understandable, since KLIEP itself is an asymmetric method (the KL divergence is asymmetric). In comparison, the sample complexity of Covariance-Precision matching is symmetric, i.e., the theorem does not show a "bias" toward either of the datasets. Thus, if one has perfectly balanced Gaussian datasets, it might be natural to use Covariance-Precision matching to learn the differences.

Experiments
In this section, we compare the performance of two direct change detection methods: KLIEP and Covariance-Precision (CP) Matching using synthetic and real-world examples.
The R [R Core Team, 2016] implementation of CP matching using ADMM can be obtained at https://github.com/sdzhao/dpm.

Synthetic Examples
Illustrative Example We now illustrate the performance of both KLIEP and CP matching using two 50-dimensional multivariate zero-mean Gaussian distributions. First, we randomly generate a 50 × 50 symmetric adjacency matrix A^(P) with 10% connectivity and draw 500 samples from a Gaussian distribution whose precision matrix Θ^(p) is constructed from A^(P) as in (15). Then we remove 6 edges randomly from A^(P), resulting in the change pattern shown in Figure 1(a), and use the result as A^(Q). Following the same steps as above, we construct Θ^(q) and generate 500 samples again. As suggested by Theorem 1, we set λ = α √(log 50 / 500), and the learned Δ̂ are shown in Figures 1(c), 1(d), and 1(e) using different α.

Table 1: Area under the curve (AUC) of the ROC plot in Figure 1(b) ("K" for KLIEP and "CP" for CP matching).
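The precise construction (15) of the precision matrices is given in the original papers; the sketch below generates synthetic data of the same flavour under a simple diagonally-dominant construction, which is an assumption made here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, n_changes = 50, 500, 6

# random symmetric adjacency with roughly 10% connectivity
A_p = np.triu((rng.random((m, m)) < 0.1).astype(float), k=1)
A_p = A_p + A_p.T

# remove a few edges to create the "changed" network
edges = np.argwhere(np.triu(A_p, k=1) > 0)
removed = edges[rng.choice(len(edges), size=n_changes, replace=False)]
A_q = A_p.copy()
for u, v in removed:
    A_q[u, v] = A_q[v, u] = 0.0

def precision_from_adjacency(A, weight=0.3):
    """Assumed construction: weighted adjacency plus a dominant diagonal."""
    Theta = weight * A
    np.fill_diagonal(Theta, np.abs(Theta).sum(axis=1) + 1.0)
    return Theta

cov_p = np.linalg.inv(precision_from_adjacency(A_p))
cov_q = np.linalg.inv(precision_from_adjacency(A_q))
X_p = rng.multivariate_normal(np.zeros(m), cov_p, size=n)
X_q = rng.multivariate_normal(np.zeros(m), cov_q, size=n)
print(X_p.shape, X_q.shape, removed)
```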
The same experiment is repeated using the CP matching method. However, since the sparsity control of CP matching is via the selection of the threshold τ, we set ε = 0.2, which shows good performance empirically, and plot the learned Δ̂ using different thresholds. Results are shown in Figures 1(f), 1(g), and 1(h).
As we can see, both approaches recover the change pattern well as we increase the sparsity control parameter.

ROC-curves
In this experiment, we compare the two methods quantitatively using ROC curves. We adopt the True Positive (TP) and True Negative (TN) rates as described in [Zhao et al., 2014], i.e., the fraction of truly changed edges that are recovered and the fraction of unchanged edges that are correctly left out, respectively.
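A small helper computing the two rates from an estimated and a true difference matrix might look as follows (our own phrasing of the standard definitions, not necessarily the exact formulas of [Zhao et al., 2014]):

```python
import numpy as np

def tp_tn_rates(delta_hat, delta_true, tol=1e-8):
    """True positive / true negative rates over the upper-triangular edge set."""
    iu = np.triu_indices(delta_true.shape[0], k=1)
    est = np.abs(delta_hat[iu]) > tol          # estimated changed edges
    true = np.abs(delta_true[iu]) > tol        # truly changed edges
    tp_rate = (est & true).sum() / max(true.sum(), 1)
    tn_rate = (~est & ~true).sum() / max((~true).sum(), 1)
    return tp_rate, tn_rate
```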
We generate an adjacency matrix A^(P) with a four-neighbour lattice structure and randomly remove d = √m edges, producing A^(Q). Two sets of n_p = n_q = 50 samples are generated using the same construction (15). The ROC curves averaged over 50 trials with different dimensions are shown in Figure 1(b), and the AUCs are reported in Table 1.
It can be seen that, as both the dimension and the number of changed edges increase, the KLIEP method retains stable performance while the performance of the CP approach decays rapidly.

Running Time
Although a rigorous timing comparison is difficult due to the different implementations of KLIEP and CP matching, in our experience KLIEP is faster but more memory-consuming, as our implementation stores the entire parameter vector in memory. On a server with 16 Xeon cores, it takes KLIEP about 15 minutes to run the experiments needed for plotting Figure 1(b), while it takes CP matching roughly 1 hour.
As to KLIEP, we also observe that "early stopping" heuristics (e.g., stopping at 100 iterations) can provide an accurate non-zero pattern within a short period of time.

Image Difference Detection
Two photos were taken on a rainy afternoon using a camera pointing at the parking lot of The Institute of Statistical Mathematics (ISM). In this task, we are interested in learning the changes between the two photos. To construct samples, we use windows of pixels (Figure 2(c)). Each window is a dimension of a dataset, and the samples are the pixel RGB values within this window. By sliding the window across the entire picture, we obtain samples of all dimensions. Two sets of data can be obtained by applying this sample-generating mechanism to the two images.
Assuming an image can be represented by an MN of windows, changes of pixel values within a window may cause changes of "interactions" between neighbouring windows. In other words, we can discover a change by looking at the change of the dependency of pixel values between a certain window and its neighbours. This is more advantageous than simply looking at the pixel values, since changing the brightness of a picture may increase the pixel values in many windows simultaneously, even if the "contrast" between two windows does not change by much.
By applying KLIEP to these two sets of data and highlighting adjacent window pairs that are involved in the changes of pairwise interactions, we may spot changes between the two images. In our experiment, we use sliding windows of size 16 × 16 on a 200 × 150 image, generating two sets of samples with m = 999 and n_p = n_q = 256. We reduce λ until |Ŝ| > 40.
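A sketch of the sample-construction step is shown below; the grayscale images and the window stride are assumptions for illustration (the resulting m depends on the stride, which is not stated above), while the number of samples per window equals the 16 × 16 = 256 pixel values, matching n_p = n_q = 256.

```python
import numpy as np

def window_samples(image, win=16, stride=6):
    """Each sliding window becomes one dimension; its pixel values are the samples.
    Returns an (n x m) matrix with n = win * win samples and m windows."""
    h, w = image.shape
    columns = []
    for top in range(0, h - win + 1, stride):
        for left in range(0, w - win + 1, stride):
            patch = image[top:top + win, left:left + win]
            columns.append(patch.reshape(-1))      # win*win pixel values
    return np.stack(columns, axis=1)               # shape: (win*win, m)

# toy usage on two random "images"; real photos would be loaded and converted to grayscale
rng = np.random.default_rng(0)
img_p, img_q = rng.random((150, 200)), rng.random((150, 200))
X_p, X_q = window_samples(img_p), window_samples(img_q)
print(X_p.shape, X_q.shape)    # (256, m), (256, m)
```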
The spotted changes are plotted in Figure 2(d). It can be seen that KLIEP has correctly labelled almost all changed parking spots between the two images, except one missing on the left. Note that here we set ψ(x_u, x_v) = exp(−‖x_u − x_v‖²/0.5); the underlying MN is highly non-Gaussian, so CP matching cannot be applied here.

Open Problems
Although pioneering works have been conducted in this area, there are still important unsolved open problems. In this section, we list a few examples.
Generalized Covariance-Precision Matching In Section 3.3, we introduced an equality (4) between the Gaussian covariance and precision matrices. This leads to a direct sparse change learning approach. However, it does not apply to more general pairwise MNs. A natural question is: can we extend this relationship between covariance and precision matrices to a more general principle? Notably, in a recent work [Loh and Wainwright, 2013], the generalized covariance matrix was used to learn a non-Gaussian graphical model structure. Would a generalized equality (4) provide us with a universal framework for learning changes between MNs?
Learning Changes from Multiple MNs In this paper, we have only reviewed algorithms that learn changes between two MNs. In fact, in some applications, datasets may be obtained as multiple "snapshots". For example, gene activities may be measured at a few different time points. Under the same assumption that changes between adjacent time points are "mild" and "sparse", can we perform multiple change detections in one shot?
Asymmetry versus Symmetry As we pointed out in Section 4.6, there exists an asymmetry in KLIEP, while Covariance-Precision matching has a symmetric formulation. An interesting future direction is to systematically investigate how such an asymmetry affects the change detection results, and more importantly, how we can automatically determine which density should be Q and which should be P in the ratio formulation.
We believe thorough investigations in these three directions will significantly expand our knowledge of learning changes between MNs in the future.

Conclusion
In this paper, we have reviewed an MN change learning method based on density ratio estimation, as well as alternative approaches. Statistical guarantees regarding support recovery and ℓ2 consistency were also reported and compared. Through their direct modelling and theoretical results, we can see an interesting common pattern in all these methods: they work well regardless of the difficulty of learning the individual MNs.
These results are inspiring, as they shed light on a new family of methods that only learn the incremental patterns. They show that if the change itself is simple enough, we can achieve good learning performance even with a limited amount of information. Compared to classic, static pattern recognition, such methods are well-suited for analysing dynamic datasets, where the "absolute" pattern is not the main interest and learning the change itself is more valuable.
These works have offered a new vision for research on learning changes between patterns. We believe these methods and theorems may find many applications in the years to come.

Figure 2: Detecting changes of parking patterns from two photos.