A Wasserstein-based measure of conditional dependence

Measuring conditional dependencies among the variables of a network is of great interest to many disciplines. This paper studies some shortcomings of the existing dependency measures in detecting direct causal influences, as well as their inability to select the groups over which dependencies are strongest, and accordingly introduces a new statistical dependency measure to overcome them. This measure is inspired by Dobrushin's coefficients and is based on the fact that there is no dependency between X and Y given another variable Z if and only if the conditional distribution of Y given $X=x$ and $Z=z$ does not change when X takes another realization $x'$ while Z takes the same realization z. We show the advantages of this measure over the related measures in the literature. Moreover, we establish the connection between our measure and the integral probability metric (IPM) that helps to develop estimators of the measure with lower complexity compared to other relevant information-theoretic measures. Finally, we show the performance of this measure through numerical simulations.


Introduction
Identifying the conditional independencies (CIs) among the variables or processes of a system is a fundamental problem in scientific investigations in fields such as biology, econometrics, the social sciences, and many others.
In probability theory, two events X and Y are conditionally independent given a third event Z if the occurrence or non-occurrence of X and Y are "independent" events in their conditional probability distribution given Z (Gorodetskii 1978). Several CI measures have been developed in the literature for different applications to capture such independence. The most commonly used one is conditional mutual information (CMI) (Gorodetskii 1978), an information-theoretic quantity. This measure has been used in fields such as communication engineering, channel coding (Cover and Thomas 2012), and causal discovery (Spirtes et al. 2000b). CMI between X and Y given Z is defined by comparing two conditional distributions, P(X|Y, Z) and P(X|Z), using the KL-divergence and then averaging over the conditioning variable Z. Hence, it is limited to those realizations with positive probability (see Sect. 4.1). One shortcoming of such a measure is that it cannot capture CIs that occur rarely or even over zero-measure sets. Another shortcoming is that it is symmetric, so it fails to encode asymmetric dependencies such as causal directions in a network.
Most conditional dependency or independency measures are defined similarly to CMI in the sense that they average over the conditioning variables. The kernel-based method in Zhang et al. (2011) is another example. Consequently, such measures may fail to identify the range of the conditioning variable Z in which the dependency between the variables of interest X and Y is clearest. For example, consider a treatment that has different effects on a particular disease for different genders. There are scenarios in which the previous CI measures (e.g., CMI) fail to identify for which gender the effect of the treatment on the disease is maximized (see Sect. 4.3).
Discovering the causal relationships in a network is one of the main applications of CI measures (Spirtes et al. 2000b). In this area, it is important to capture the direct causal influence between two variables in a network independently of the other, indirect causal influences between them. As we will show in Sect. 4.2, previous CI measures (e.g., CMI) cannot capture the direct causal influence between two variables (cause and effect) in a network when some variables on the indirect causal path depend on the cause almost deterministically.
The main contribution of this paper is the introduction of a statistical metric inspired by Dobrushin's coefficient (Dobrushin 1970) to measure the dependency or independency between X and Y given Z in a network from their realizations. Our metric is built on the paradigm that if Y has no dependency on X given Z, then the conditional distribution of Y given X = x and Z = z will not change if x varies while Z takes the same realization z. We will show that this dependency measure overcomes the aforementioned limitations. Moreover, we will establish the connection between our measure and the IPM to develop estimators for our metric with lower complexity compared to other relevant information-theoretic measures such as CMI. This is because the proposed estimators depend on the sample points only through the metric of the space, and thus their complexity is independent of the dimension of the samples.
Perhaps the best-known paradigm for visualizing the CIs among the variables of a network is Bayesian networks (Pearl 2003). They are directed acyclic graphs (DAGs) in which nodes represent random variables and directed edges denote the direction of causal influences. Analogously, using the dependency measure in this work, we can represent the causal structure of a network via a DAG that possesses the same properties as a Bayesian network.
It is also worth mentioning that several measures exist to capture CIs and causal influences among time series, for instance, transfer entropy (Schreiber 2000) and directed information (Massey 1990). Measuring the reduction of uncertainty in one variable after knowing another variable is the key idea behind such measures. Because these measures are defined based on CMI, they suffer from the aforementioned limitations as well. Note that the proposed measure can easily be modified to capture such influences in time series too.

Definitions
In this section, we review some basic definitions and our notation. Throughout this paper, we use capital letters to represent random variables, lowercase letters to denote realizations of random variables, and bold capital letters to denote matrices. We denote a subset of random variables with index set K ⊆ [m], where [m] := {1, ..., m}, by X_K, and we write −{j} for [m] ⧵ {j}.
In a directed graph G⃗ = (V, E⃗), we denote the parent set of a node i ∈ V by Pa_i := {j : (j, i) ∈ E⃗}, and the set of its non-descendants by Nd_i. We write X ⫫ Y | Z to denote that X and Y are independent given Z.
Bayesian network: A Bayesian network is a graphical model that represents the conditional independencies among a set of random variables via a directed acyclic graph (DAG) (Spirtes et al. 2000b). A set of random variables X is Bayesian with respect to a DAG G⃗ if its joint distribution factorizes as P(X) = ∏_i P(X_i | X_{Pa_i}). (1) Up to some technical conditions (Lauritzen 1996), this factorization is equivalent to the causal Markov condition, which states that a DAG is only acceptable as a possible causal hypothesis if every node is conditionally independent of its non-descendants given its parents. The DAG corresponding to a joint distribution possesses the global Markov condition if for any disjoint sets of nodes A, B, and C for which A and B are d-separated by C, we have X_A ⫫ X_B | X_C. It is shown in Lauritzen (1996) that the causal Markov condition and the global Markov condition are equivalent.
Faithfulness: A joint distribution is called faithful with respect to a DAG if all the conditional independence (CI) relationships implied by the distribution can also be found from the corresponding DAG using d-separation, and vice versa (Judea 2014). It is possible that several DAGs encode the same set of CI relationships; in this case, they are called Markov equivalent.

New dependency measure
As we mentioned earlier, we use the following paradigm to define our measure of independency: if Y has no dependency on X given Z, then the conditional distribution of Y given X = x and Z = z should not change when X takes a different realization x′ while Z takes the same realization z. This paradigm is similar in nature to Pearl's paradigm of causal influence (Pearl 2003). He proposed that the influence of a variable (potential cause) on another variable (effect) in a network be assessed by assigning different values to the potential cause, while the effects of the other variables are removed, and observing the behavior of the effect variable. Below, we formally introduce our dependency measure.
Consider X, a collection of m random variables. To identify the dependency of X_i on X_j, we select a set of indices K ⊆ −{i, j} and consider the following two probability measures:

P(X_i | X_{K∪{j}} = x_{K∪{j}}) and P(X_i | X_{K∪{j}} = y_{K∪{j}}), (2)

where x_{K∪{j}} and y_{K∪{j}} ∈ E^{|K|+1} are two realizations of X_{K∪{j}} that are the same everywhere except at X_j. Further, assume x_{K∪{j}} at the position of X_j equals x and y_{K∪{j}} equals y (y ≠ x) at this position. If there exists a subset K ⊆ −{i, j} such that for all such realizations the two measures in (2) are the same, then we say X_i has no dependency on X_j. This is analogous to conditional independence, which states that if X_j and X_i are independent given some X_K, then there is no causal influence between them. Note that using mere observational data, comparing the two conditional probabilities in (2) reveals the dependency between X_i and X_j. However, when interventional data are available, we can identify whether X_j causes X_i, i.e., the direction of influence.
To compare the two probability measures in (2), a metric on the space of probability measures is required. Several metrics can be used; here, we adopt the Wasserstein metric, defined below. (The set of distributions that do not satisfy the faithfulness assumption has measure zero (Meek 1995).)
Definition 1 Let (E, d) be a complete and separable metric space equipped with the Borel σ-field B, and let M be the space of all probability measures on (E, B). Given μ_1, μ_2 ∈ M, the Wasserstein metric between μ_1 and μ_2 is given by

W_d(μ_1, μ_2) := inf_π ∫_{E×E} d(x, y) dπ(x, y),

where the infimum is taken over all probability measures π on E × E whose marginal distributions are μ_1 and μ_2, respectively.
Using the above distance, we define the dependency of X_i on X_j given K ⊆ −{i, j} as follows:

c^K_{i,j} := sup W_d( P(X_i | x_{K∪{j}}), P(X_i | y_{K∪{j}}) ) / d(x_j, y_j), (3)

where the supremum is over all realizations x_{K∪{j}} and y_{K∪{j}} that differ only at the jth variable. Moreover, we assume x_{K∪{j}} at the jth position equals x and y_{K∪{j}} equals y (y ≠ x) at this position. When K = −{i, j}, c^K_{i,j} is called Dobrushin's coefficient (Dobrushin 1970). Similarly, we define the dependency of a set of nodes B on a disjoint set A given K, where K ∩ (A ∪ B) = ∅, as c^K_{B,A}, by taking the supremum in (3) over pairs of realizations of X_{K∪A} that differ only on A. (4)

Remark 1 The dependency measure of i on j given K in (3) is defined by taking the supremum over all realizations. Alternatively, we could introduce a measure that replaces the supremum in (3) with an expectation over the realizations of (X_j, X_K). (5) Clearly, this expression is bounded above by (3). One caveat of taking the expectation rather than the supremum is that, similar to the conditional mutual information I(X_i; X_j | X_K), which is also defined via an expectation, the measure in (5) cannot capture dependencies that occur over zero-measure sets.
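For discrete variables, the quantity in (3) can be computed directly from a conditional probability table. The sketch below is illustrative rather than the paper's implementation: it assumes a joint pmf array `p[x, y, z]`, uses the discrete metric on X (so the normalizing factor d(x, x′) equals 1), and uses the values of Y as support points for scipy's one-dimensional Wasserstein distance.

```python
import itertools

import numpy as np
from scipy.stats import wasserstein_distance

def dep_y_on_x_given_z(p, y_vals):
    """Sup over z and pairs x != x' of W1(P(Y|x,z), P(Y|x',z))."""
    n_x, n_y, n_z = p.shape
    c = 0.0
    for z in range(n_z):
        for x1, x2 in itertools.combinations(range(n_x), 2):
            pxz1, pxz2 = p[x1, :, z].sum(), p[x2, :, z].sum()
            if pxz1 == 0 or pxz2 == 0:
                continue  # conditional undefined on zero-probability realizations
            w = wasserstein_distance(y_vals, y_vals,
                                     p[x1, :, z] / pxz1, p[x2, :, z] / pxz2)
            c = max(c, w)
    return c

# Y = X (a copy): maximal dependence; X and Y independent: zero dependence.
p_copy = np.zeros((2, 2, 1)); p_copy[0, 0, 0] = p_copy[1, 1, 0] = 0.5
p_ind = np.full((2, 2, 1), 0.25)
print(dep_y_on_x_given_z(p_copy, y_vals=[0.0, 1.0]),
      dep_y_on_x_given_z(p_ind, y_vals=[0.0, 1.0]))
```

Note how the supremum form makes the measure sensitive to the single worst-case pair of realizations, in contrast to the averaged variant in (5).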

Maximum mean discrepancy
Using a special case of the Kantorovich–Rubinstein duality theorem (Villani 2003), we obtain an alternative way of computing the Wasserstein metric:

W_d(μ_1, μ_2) = sup_{f ∈ F_L} | ∫ f dμ_1 − ∫ f dμ_2 |, (6)

where F_L is the set of all continuous functions satisfying the Lipschitz condition |f(x) − f(y)| ≤ d(x, y). This representation of the Wasserstein metric is a special form of the integral probability metric (IPM) (Müller 1997), which has been studied extensively in probability theory (Dudley 2002) with applications in empirical process theory (van der Vaart and Wellner 1996), the transportation problem (Villani 2003), etc. An IPM is defined as in (6), but instead of F_L, the supremum is taken over a class of real-valued bounded measurable functions on E.
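For discrete distributions, the primal formulation in Definition 1 is an ordinary linear program over couplings; a minimal sketch (the two-point marginals and the 0–1 cost matrix are toy assumptions, not from the paper) using scipy's LP solver:

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_lp(mu, nu, cost):
    """Solve min_pi <cost, pi> s.t. pi has marginals mu (rows) and nu (cols)."""
    n, m = len(mu), len(nu)
    c = cost.reshape(-1)
    rows = []
    for i in range(n):            # row-marginal constraints: sum_j pi[i, j] = mu[i]
        r = np.zeros((n, m)); r[i, :] = 1; rows.append(r.reshape(-1))
    for j in range(m):            # column-marginal constraints: sum_i pi[i, j] = nu[j]
        r = np.zeros((n, m)); r[:, j] = 1; rows.append(r.reshape(-1))
    res = linprog(c, A_eq=np.array(rows), b_eq=np.concatenate([mu, nu]),
                  bounds=(0, None), method="highs")
    return res.fun

mu = np.array([0.5, 0.5])
nu = np.array([0.25, 0.75])
cost = np.array([[0.0, 1.0], [1.0, 0.0]])  # discrete metric on two states
print(wasserstein_lp(mu, nu, cost))  # moves 0.25 mass, so the distance is 0.25
```

The dual of this LP is exactly the Kantorovich–Rubinstein form (6) restricted to the sample points, which is the basis of the estimator discussed in Sect. 5.1.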
One particular instance of the IPM is the maximum mean discrepancy (MMD), in which the supremum is taken over the unit ball of a reproducing kernel Hilbert space (RKHS) H (Aronszajn 1950) with reproducing kernel k(⋅, ⋅), i.e., F = {f : ‖f‖_H ≤ 1}. MMD has been used in statistical applications such as independence testing and testing for conditional independence (Gretton et al. 2006; Fukumizu et al. 2007; Sun et al. 2007).
It is shown in Gretton et al. (2006) that when H is a universal RKHS (Micchelli et al. 2006), defined on the compact metric space E, then MMD(μ_1, μ_2) = 0 if and only if μ_1 = μ_2. In this case, MMD can also be used to compare the two conditional distributions in (2), because an MMD of zero implies that the two conditional distributions are the same. This allows us to define a new dependency measure, which we denote by c̃^K_{i,j}, analogous to (3) but using MMD instead of the Wasserstein distance. It is straightforward to show that this measure has similar properties to the one in (3). The main difference between the two measures lies in their estimation methods, which we discuss in Sect. 5.1.

Advantages of the dependency measure
Herein, we discuss the advantages of our measure over other dependency measures in the literature.

Mutual information and information flow
Conditional mutual information is an information-theoretic measure that has been used in the literature to identify the conditional independence structure of a network. This measure compares the two probability measures P(X_i | X_j, X_K) and P(X_i | X_K) using the KL-divergence:

I(X_i; X_j | X_K) = E[ D_KL( P(X_i | X_j, X_K) ‖ P(X_i | X_K) ) ]. (8)

This measure is symmetric and hence cannot capture the direction of influence. Moreover, it only compares the probability measures over pairs (X_i, X_j) that have positive probability. Note that any other measure in the literature based on conditional independence testing, such as the kernel-based methods in Sun et al. (2007) and Zhang et al. (2011), has a similar limitation.
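The averaging in (8) is easy to see in the discrete case. A minimal sketch (not the paper's code; the pmf arrays are toy assumptions) that computes I(X; Y | Z) from a joint table `p[x, y, z]`, skipping zero-probability triples exactly as the definition requires:

```python
import numpy as np

def cmi(p):
    """Discrete I(X;Y|Z) in nats from a joint pmf p[x, y, z]."""
    pz = p.sum(axis=(0, 1))   # P(z)
    pxz = p.sum(axis=1)       # P(x, z)
    pyz = p.sum(axis=0)       # P(y, z)
    val = 0.0
    for x in range(p.shape[0]):
        for y in range(p.shape[1]):
            for z in range(p.shape[2]):
                if p[x, y, z] > 0:  # zero-probability triples contribute nothing
                    val += p[x, y, z] * np.log(
                        p[x, y, z] * pz[z] / (pxz[x, z] * pyz[y, z]))
    return val

# Z = X xor Y with X, Y fair and independent: I(X;Y|Z) = ln 2.
p_xor = np.zeros((2, 2, 2))
for x in range(2):
    for y in range(2):
        p_xor[x, y, x ^ y] = 0.25

# X and Y conditionally independent given Z: I(X;Y|Z) = 0.
p_ci = np.zeros((2, 2, 2))
for z, (px, py) in enumerate([((0.7, 0.3), (0.6, 0.4)), ((0.2, 0.8), (0.9, 0.1))]):
    for x in range(2):
        for y in range(2):
            p_ci[x, y, z] = 0.5 * px[x] * py[y]

print(cmi(p_xor), cmi(p_ci))
```

The `if p[x, y, z] > 0` guard is precisely the restriction to positive-probability realizations criticized in the text.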
Example 1 Consider a network of two variables X and Y, in which X ∼ N(0, 1) is a zero-mean Gaussian variable and Y is N(0, 1) whenever X is a rational number and N(1, 2) otherwise. In this network, Y is dependent on X, but this cannot be captured using CMI, because I(X; Y) = 0. On the other hand, we have c_{y,x} > 0 and c_{x,y} = 0.
Another quantity that has been introduced in the literature to quantify causal influences in a network is information flow (Ay and Polani 2008). This quantity is defined using Pearl's do-calculus (Pearl 2003). Intuitively, the operation do(x_i) removes the dependencies of X_i on its parents and replaces P(X_i | X_{Pa_i}) with a delta function. Herein, to give an interpretation of how (3) can be used to identify causal relationships that are defined in terms of intervention, we compare our measure with the information flow.
Below, we use the formal definition of the information flow from X_A to X_B imposing X_K, denoted I(X_A → X_B | do(X_K)), where A, B, and K are three disjoint subsets of V. This is defined analogously to the conditional mutual information in (8). But unlike the conditional mutual information, the information flow is defined for all pairs of realizations rather than being limited to those with positive probability (similar to our measure). Similar measures are introduced in Janzing et al. (2013) and Nihat and Krakauer (2007), which are also based on the do-operation. Analogously, we can define our measure based on the do-operation to capture the direction of causal influences in a network by substituting the conditional distributions in (2) with their do versions, and then using the resulting measures in (3). (9)
Because the Wasserstein metric can be estimated using linear programming (see Sect. 5.1), our measure has computational advantages over the information flow and other similar measures that use the KL-divergence. Another advantage of (3) over the information flow is that it requires fewer interventions when interventional data are used. More precisely, calculating (9) requires at least two do-operations (do(x_{A∪K}) and do(x_K)), but (3) requires only one (do(x_{K∪{j}})). Moreover, as the next example shows, unlike our measure, the information flow depends on the underlying DAG.
Example 2 Consider a network of three binary random variables {X, Y, Z} with Z = X ⊕ Y, an XOR. Suppose the underlying DAG of this network is given by Fig. 1b, in which X takes zero with probability b. In this case, the information flow I(X → Z | do(Y)) takes one value, where H denotes the entropy; however, if the underlying DAG is given by Fig. 1a, we have I(X → Z | do(Y)) = H(b). Now, consider a scenario in which b tends to zero. In this scenario, both DAGs describe a system in which X = Y and Z = 0. However, the information flow assigns different values to the two DAGs, whereas c_{z,x} is the same in both DAGs, independent of b, and positive.

A better measure for direct causal influences
Consider a network comprising three random variables {X, Y, Z}, in which Y = f(X, W_1) and Z = g(X, Y, W_2), such that the transformations from (X, W_1) to (X, Y) and from (X, Y, W_2) to (X, Y, Z) are invertible, and W_1 and W_2 are independent exogenous noises. In other words, there exist functions φ and ψ such that W_1 = φ(X, Y) and W_2 = ψ(X, Y, Z). Furthermore, f is an injective function in its first argument, i.e., if f(x_1, w) = f(x_2, w) for some w, then x_1 = x_2.
To measure the direct influence from X to Z, one may compute the conditional mutual information between X and Z given Y, i.e., I(X; Z|Y). However, this is not a good measure, because as the dependency of Y on X grows, i.e., H(Y|X) → 0, we have I(X; Z|Y) → 0. This can be explained by the fact that as H(Y|X) goes to zero, in other words, as P_{W_1} tends to a point mass at some fixed value w_0, specifying the value of X removes the ambiguity about the value of Y. Thus, using the injectivity of f, it is straightforward to show that I(X; Z|Y) → 0. Note that if f is not injective, then for fixed w_1 and y there are several x such that y = f(x, w_1); thus, specifying the value of Y does not determine X uniquely, and I(X; Z|Y) will not go to zero. This analysis shows that I(X; Z|Y) fails to capture the direct influence between X and Z when Y depends on X almost deterministically. However, looking at c^y_{z,x}, the relevant conditional distribution is P_{x,y}(Z) := P_{W_2}(ψ(x, y, Z)) |∂g/∂w_2(x, y, ψ(x, y, Z))|^{−1}. This distribution depends only on the realizations of (X, Y) and is independent of P_{X,Y}. Hence, changing the dependency between X and Y will not affect c^y_{z,x}, which makes it a better candidate for measuring the direct influences between the variables of a network. As an illustration, we present a simple example. But first, we need the following result. All proofs are presented in the Appendix.
Theorem 1 Consider X = ΛX + W, where Λ has zero diagonal and its support represents a DAG, and W is a vector of zero-mean independent random variables. Then, c^{Pa_i⧵{j}}_{i,j} = |Λ_{i,j}|.

Example 3 Consider a network of three variables {X, Y, Z} in which Y = aX + W_1 and Z = bX + cY + W_2 for some non-zero coefficients {a, b, c} and exogenous noises {W_1, W_2}. Hence,

I(X; Z|Y) = H(bX + W_2 | Y) − H(W_2). (10)

As we mentioned earlier, by reducing the variance of W_1, the first term in (10) tends to H(bX + W_2 | X) = H(W_2). Hence, (10) goes to zero. But, using the result of Theorem 1, we have c^y_{z,x} = |b|, which is independent of the variance of W_1.
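The vanishing of (10) can be checked numerically. The sketch below (an assumption-level illustration, not the paper's code) builds the exact covariance matrix of (X, Y, Z) for the linear Gaussian model of Example 3 and evaluates I(X; Z|Y) through the Gaussian partial-correlation identity I(X; Z|Y) = −0.5 ln(1 − ρ²), with ρ read off the precision matrix; as Var(W_1) shrinks, the CMI collapses while Theorem 1 gives c^y_{z,x} = |b| regardless.

```python
import numpy as np

def cmi_xz_given_y(a, b, c, var_w1, var_w2=1.0):
    """Exact I(X; Z | Y) for X ~ N(0,1), Y = aX + W1, Z = bX + cY + W2."""
    vx = 1.0
    vy = a**2 * vx + var_w1
    cov = np.array([
        [vx,               a * vx,                          (b + c * a) * vx],
        [a * vx,           vy,                              a * (b + c * a) * vx + c * var_w1],
        [(b + c * a) * vx, a * (b + c * a) * vx + c * var_w1,
         (b + c * a)**2 * vx + c**2 * var_w1 + var_w2],
    ])
    prec = np.linalg.inv(cov)
    rho = -prec[0, 2] / np.sqrt(prec[0, 0] * prec[2, 2])  # partial corr. of X, Z given Y
    return -0.5 * np.log(1.0 - rho**2)

for var_w1 in [1.0, 0.1, 0.001]:
    print(var_w1, cmi_xz_given_y(a=1.0, b=2.0, c=1.0, var_w1=var_w1))
```

As Var(W_1) goes from 1 to 0.001, the printed CMI drops toward zero, even though the direct coefficient b = 2 never changes.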

Group selection for effective intervention
Consider a network of three variables {X, Y, C} in which C is a common cause of X and Y, and X influences Y. In this network, to measure the influence of X on Y, one may consider P(Y|do(X)), which is given by ∑_c P(Y|X, c)P(c) = E_C[P(Y|X, C)]; see, e.g., the back-door criterion in Pearl (2003). This conditional distribution is an average over all possible realizations of the common cause C.
Consider an experiment conducted on a group of people of different ages C, in which the goal is to identify the effect of a treatment X on a particular disease Y. Suppose that this treatment has a clearer effect on the disease for elderly people and a less obvious effect for younger ones. In this case, averaging the effect of the treatment on the disease over all people of different ages, i.e., P(Y|do(X)), might not reveal the true effect of the treatment. Hence, it is important to identify a regime (in this example, an age range) of C in which the influence of X on Y is maximized. As a consequence, we can identify the group of subjects on which the intervention is effective.
Note that this problem cannot be formalized using the do-operation or other measures that average over all possible realizations of C. However, using the measure in (3), we can formulate it as follows: given X = x and two different realizations of C, say c and c′, we obtain two conditional probabilities, P(Y|x, c) and P(Y|x, c′). Then, we say that in group C = c the causal influence between X and Y is more visible than in group C = c′ if, given C = c, changing the assignments of X leads to larger variation of the conditional probabilities than changing the assignment of X given C = c′; more precisely, if c^{C=c}_{y,x} ≥ c^{C=c′}_{y,x}, where

c^{C=c}_{y,x} := sup W_d( P(Y|x, c), P(Y|x′, c) ) / d(x, x′), (11)

with the supremum over x ≠ x′. Note that c^C_{y,x} = sup_c c^{C=c}_{y,x}, where c^C_{y,x} is given in (3). Using this new formulation, we define the range of C in which the influence from X to Y is maximized as arg max_c c^{C=c}_{y,x}.
Example 4 Suppose that Y = CX + W_2 and X = W_1/C, where C takes values in {1, ..., M} with probabilities {p_1, ..., p_M} and W_i ∼ N(0, 1). In this case, we have c^{C=c}_{y,x} = |c|. Thus, C = M shows the influence of X on Y most clearly. On the other hand, such a property cannot be detected using other measures; for example, we have I(X; Y|C = c) = 0.5 log(2) for all c.
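A sampling sketch of Example 4 (illustrative assumptions: we fix the treatment levels x and x′ directly rather than sampling X, and normalize by |x − x′| as in (11)): within group C = c, P(Y|x, c) = N(cx, 1), so the empirical one-dimensional Wasserstein distance recovers |c|.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def group_dependence(c, x, x_prime, n=200_000):
    """Empirical W1(P(Y|x,c), P(Y|x',c)) / |x - x'| for Y = c*X + W2."""
    y1 = c * x + rng.standard_normal(n)        # samples from P(Y | x, c)
    y2 = c * x_prime + rng.standard_normal(n)  # samples from P(Y | x', c)
    return wasserstein_distance(y1, y2) / abs(x - x_prime)

for c in [1, 2, 3]:
    print(c, group_dependence(c, x=0.0, x_prime=1.0))  # approx. |c| per group
```

The largest group coefficient identifies the regime of C in which intervening on X is most effective, exactly the arg max in the text.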

Properties of the measure
Lemma 1 The measure defined in (3) possesses the following properties: (1) Asymmetry: in general, c^K_{i,j} ≠ c^K_{j,i} (see Example 1 for the asymmetry of c^K_{i,j}). Note that unlike the intersection property of conditional independence, which does not always hold, the intersection property of the dependency measure in (3) always holds. This is because (3) is defined over all realizations (x_j, x_K), not only those with positive measure.
We say a DAG possesses the global Markov property with respect to (3) if for any node i and disjoint sets B and C for which i is d-separated from B by C, we have c^C_{i,B} = c^C_{B,i} = 0. Using the above lemma and Theorem 3.27 in Lauritzen (1996), it is straightforward to show that a faithful network of m random variables whose causal structure is a DAG possesses the global Markov property (see the Appendix for more details). This property can be used to develop reconstruction algorithms (e.g., the PC algorithm (Spirtes et al. 2000b)) for the causal structure of a network.

Estimation
The measure introduced in (3) can be computed explicitly for special probability measures. For instance, if the joint distribution of X is Gaussian with mean μ and covariance matrix Σ, then using the results of Givens and Michael (1984), we obtain a closed-form expression for c^K_{i,j} in terms of Σ_{i,{j,K}}, the sub-matrix of Σ comprising row i and columns {j, K}. Hence, in such systems, one can estimate the dependency measure by estimating the covariance matrix. However, this is not the case in general. Therefore, we introduce a non-parametric method for estimating our dependency measure using kernel methods.
Given i.i.d. samples {x^{(1)}, ..., x^{(N_1)}} and {x^{(N_1+1)}, ..., x^{(N_1+N_2)}} drawn randomly from μ_1 and μ_2, respectively, the estimator of (6) proposed in Sriperumbudur et al. (2010) is the solution of a linear program:

Ŵ := max_{a_1,...,a_N} ∑_i y_i a_i subject to |a_i − a_j| ≤ d(x^{(i)}, x^{(j)}), ∀ i, j, (12)

where N = N_1 + N_2, and y_i := 1/N_1 for i ≤ N_1 and y_i := −1/N_2 otherwise; this is the IPM between the empirical estimators of μ_1 and μ_2. Similarly, the estimator of MMD is given by

MMD̂ := ( ∑_{i,j} y_i y_j k(x^{(i)}, x^{(j)}) )^{1/2}, (13)

where k(⋅, ⋅) represents the kernel of H. It is shown in Sriperumbudur et al. (2010) that (12) converges to (6) almost surely as N_1, N_2 → ∞, as long as the underlying metric space is totally bounded. It is important to mention that the estimator in (12) depends on the {x^{(j)}} only through the metric d(⋅, ⋅), and thus its complexity is independent of the dimension of x^{(i)}, unlike KL-divergence estimators (Qing et al. 2005). The estimator in (13) also converges to (7) almost surely, at a rate of order O(1/√N). Now, consider a network of m random variables X. Given N i.i.d. realizations of X, {z^{(1)}, ..., z^{(N)}}, where z^{(l)} ∈ E^m, we use (12) to define an estimator ĉ^K_{i,j} by taking the maximum of Ŵ over pairs of realizations z^{(l)}, z^{(k)} such that z^{(l)}_{K∪{j}} = z^{(k)}_{K∪{j}} off j. Similarly, one can introduce an estimator for c̃^K_{i,j} using (13). By applying the result of Corollary 5 in Spirtes et al. (2000a), we obtain the following result.
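The MMD estimator in (13) is a single quadratic form in the Gram matrix. A minimal sketch with a Gaussian kernel (the bandwidth and the one-dimensional test distributions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def mmd(x1, x2, bandwidth=1.0):
    """Biased empirical MMD of (13): sqrt(y^T K y) with y_i = 1/N1 or -1/N2."""
    x = np.concatenate([x1, x2])[:, None]
    y = np.concatenate([np.full(len(x1), 1.0 / len(x1)),
                        np.full(len(x2), -1.0 / len(x2))])
    gram = np.exp(-((x - x.T) ** 2) / (2 * bandwidth**2))  # Gaussian kernel matrix
    return np.sqrt(y @ gram @ y)  # PSD Gram matrix keeps the quadratic form >= 0

a = rng.standard_normal(1000)
b = rng.standard_normal(1000)        # same distribution: MMD near zero
c = 2.0 + rng.standard_normal(1000)  # shifted distribution: MMD bounded away from zero
print(mmd(a, b), mmd(a, c))
```

Note that, as claimed in the text, the samples enter only through the kernel evaluations, so the cost is the same whatever the dimension of the sample points.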
Corollary 1 Let (E, d) be a totally bounded metric space, and consider a network of random variables with positive probabilities. Then ĉ^K_{i,j} converges to c^K_{i,j} almost surely as N goes to infinity.

Experimental results
Herein, we present two simulations to verify the theoretical results. The first experiment illustrates the group-selection advantage, and the second shows an application of the measure for capturing rare dependencies.
Group selection: In this simulation, we considered a group of individuals (C ∈ {male, female}) to study the effect of a special treatment X on their health condition Y. For instance, X can denote a sleep aid and Y can represent the individual's awareness level the next morning. Most psychotropic drugs are metabolized in the liver. Because the male body breaks down Ambien and other sleep aids faster, women typically have more of the drug in their system the next morning. For this simulation, we considered the following mathematical model for X, Y, and C: X = N(1.5, 1) and Y = 2X + N(0, 1) when C = female, and X = N(1, 4) and Y = 3X + N(0, 9) otherwise.
Accordingly, we generated different sample sizes N ∈ {40, ..., 1200} and estimated I(X; Y|c) and ĉ^{C=c}_{y,x}. Figure 2 depicts the results. Since, for a given c, (X, Y) is jointly Gaussian, we estimated I(X; Y|c) by estimating the covariance matrix (Cover and Thomas 2012), and we estimated our measure using (13) with Gaussian kernels. As Fig. 2 shows, although the treatment has different effects on the two genders, I(X; Y|C) cannot capture this.
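A closed-form sanity check of why I(X; Y|C) is uninformative here (a sketch assuming the standard Gaussian identity I(X; Y) = 0.5 ln(1 + slope² · Var(X)/Var(noise)), not the paper's estimator): with the model above, the two groups give exactly the same CMI even though their regression slopes differ.

```python
import numpy as np

def gaussian_mi(slope, var_x, var_noise):
    """Mutual information of X and Y = slope*X + noise, all Gaussian (nats)."""
    return 0.5 * np.log(1.0 + slope**2 * var_x / var_noise)

i_female = gaussian_mi(slope=2.0, var_x=1.0, var_noise=1.0)  # Y = 2X + N(0,1), X ~ N(1.5,1)
i_male = gaussian_mi(slope=3.0, var_x=4.0, var_noise=9.0)    # Y = 3X + N(0,9), X ~ N(1,4)
print(i_female, i_male)  # both equal 0.5 * ln(5)
```

Because 2²·1/1 = 3²·4/9 = 4, the signal-to-noise ratios coincide, so any measure built on this averaged quantity cannot separate the groups, whereas c^{C=c}_{y,x} tracks the slope itself.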
Capturing rare dependencies: We simulated the non-linear system in (15), with W_i ∼ U[−1, 1], and learned its corresponding structure.
In this example, the event that X_3 is a natural number occurs rarely, since the measure of the natural numbers in [−1, 1] is zero. We used the MMD estimator in (13) with Gaussian kernels to estimate the dependency measures and obtained the corresponding DAG of this network from observations of size N ∈ {900, 2500}. Using the results on the convergence rate of the MMD estimator, we used a threshold of order O(1/√N) to distinguish positive from zero measures. Figure 3 depicts the resulting DAGs. We also compared the performance of our measure with the kernel-based method proposed in Zhang et al. (2011). Note that in this example, since the influence of X_3 on X_5 is not detectable by mere observation, the best we can learn from observational data is the DAG presented in Fig. 3b. This is because the probability of X_3 being a natural number is zero, and therefore, in (15), we have X_5 = 2√|X_1| + W_5 almost surely with observational data. However, with the same number of observations, the kernel-based method identifies an extra edge (Fig. 3d).
Next, we fixed the value of X_3 to be a natural number and an irrational number, separately, and observed the outcomes of the other variables for different sample sizes. Figure 3e, f depict the outcomes of the learning algorithm that uses our measure. In this case, X_3 → X_5 was identified, and the Meek rules then helped to detect all the remaining directions, even that of X_1 − X_5, as shown in Fig. 3f.

Conclusion
We studied several shortcomings of the existing dependency measures in detecting direct causal influences and introduced a new statistical dependency measure to overcome them. This measure is inspired by Dobrushin's coefficients and is based on the fact that there is no dependency of one variable on another if and only if the conditional distribution of the former remains unchanged when different realizations are assigned to the latter. We presented the advantages of this measure over related measures. By establishing the connection between our measure and the integral probability metric (IPM), we developed estimators for our measure with lower complexity than other state-of-the-art information-theoretic measures such as conditional mutual information.

Preliminaries
Herein, we present additional information about Wasserstein (or Kantorovich) metric and IPM.
Definition 2 Let (ℝ, d) be a complete and separable metric space, and let M be the space of all probability measures on ℝ. If P and Q are the distribution functions of probability measures μ and ν ∈ M, respectively, the Kantorovich metric is defined by

W(μ, ν) := ∫_ℝ |P(x) − Q(x)| dx.

For any separable metric space, this is equivalent to the supremum form over Lipschitz functions. By the Kantorovich–Rubinstein theorem, the Kantorovich metric is equal to the Wasserstein metric defined in Definition 1. For an overview, see Gibbs and Edward (2002).
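The one-dimensional reduction to an L1 distance between CDFs is easy to verify empirically. A sketch (the sample distributions are toy assumptions) comparing a direct CDF-integral computation against scipy's implementation:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)

def w1_via_cdfs(a, b):
    """Integrate |F_a - F_b| over the grid of all sample points."""
    grid = np.sort(np.concatenate([a, b]))
    fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)  # empirical CDF of a
    fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)  # empirical CDF of b
    # CDFs are piecewise constant between grid points, so the integral is a sum.
    return np.sum(np.abs(fa - fb)[:-1] * np.diff(grid))

a = rng.standard_normal(500)
b = rng.standard_normal(500) + 1.0
print(w1_via_cdfs(a, b), wasserstein_distance(a, b))  # the two values agree
```

This identity is what makes the one-dimensional estimator so cheap: a sort and a cumulative sum, with no linear program required.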
Definition 3 Let (E, d) be a complete and separable metric space. The integral probability metric (IPM) between two measures μ and ν is defined by

γ_F(μ, ν) := sup_{f ∈ F} | ∫ f dμ − ∫ f dν |,

where F is a class of real-valued bounded measurable functions on E.
The choice of the function class F determines the type of IPM. For instance, if F is the class of all continuous functions satisfying the Lipschitz condition, it becomes the Wasserstein metric; for MMD, F is the unit ball of an RKHS H; and if F is the class of measurable functions bounded by one, we obtain the total variation metric. For further details, see Sriperumbudur et al. (2010) and the references therein.

Proof of Lemma 1
• c^K_{i,j} ≥ 0, since the Wasserstein distance is a metric. If c^K_{i,j} = 0, then W_d(P(X_i|x_j, x_K), P(X_i|y_j, x_K)) = 0 for all realizations x_j, y_j, and x_K. Using the fact that the Wasserstein distance is a metric on the space of probability measures, the above equality, and the total probability law, we obtain P(X_i|y_j, x_K) = P(X_i|x_K). This equality holds for all y_j and x_K.
• We show the asymmetry by an example. Let X = U_{[0,1]} be uniformly distributed between zero and one, and define Y from X through the set A = {i/(i+1) : i ∈ ℕ} and a random variable V_{[0,1]} independent of U that is distributed non-uniformly over [0, 1]. In this case, c_{y,x} > 0. On the other hand, it is easy to see that Y has a uniform distribution over [0, 1] almost surely. Furthermore, for two measurable sets C and B in the σ-algebra, P(X ∈ C | Y ∈ B) = P(X ∈ C); the last equality uses the fact that A has Lebesgue measure zero. Thus, the value of Y does not affect the conditional distribution of X given Y, i.e., c_{x,y} = 0.
• If c^K_{i,{j,k}} = 0, then W_d(P(X_i|x_j, x_k, x_K), P(X_i|y_j, y_k, x_K)) = 0 for all realizations x_j, y_j, x_k, y_k, x_K. By the total probability law, we obtain P(X_i|x_k, x_K) = P(X_i|y_j, y_k, x_K) = P(X_i|y_k, x_K). Hence, c^K_{i,k} = 0. Similarly, we can prove that c^K_{i,j} = 0.
• Suppose c^K_{i,{j,k}} = 0; then, from the previous proof, we have P(X_i|x_k, x_K) = P(X_i|y_k, y_j, x_K) for all realizations y_j, x_k, y_k, x_K. This is equivalent to saying that c^{K∪{j}}_{i,k} = 0. The other part can be shown similarly.
• If c^K_{i,j} = c_{i,K} = 0, then from c^K_{i,j} = 0 and the total probability law, we obtain (16). On the other hand, using the triangle inequality of the Wasserstein metric, the distance of interest is bounded by a sum of three terms; the first and third terms on the right-hand side are zero due to (16), and the second term is zero due to c_{i,K} = 0.
for all realizations x_j, x_k, and x_K. Similarly, because c^{K∪{j}}_{i,k} = 0, we have P(X_i|x_j, x_k, x_K) = P(X_i|x_j, x_K) for all realizations x_j, x_k, and x_K. Hence, for all realizations, we have P(X_i|x_j, x_K) = P(X_i|x_k, x_K). This result and the total probability law establish the claim.

The global Markov property
Since the influence structure of this network is a DAG, there exists an ordering of the variables such that for every node i, all its parents have indices less than i. Without loss of generality, suppose that {X_1, ..., X_m} is such an ordering. Furthermore, using the chain rule, we have the factorization in (17), where X_{<i} denotes all the variables with indices less than i. Due to the nature of this ordering, all the nodes in {<i} that do not belong to Pa_i are non-descendants of node i. Hence, by the definition of the dependency measure, they have zero influence on X_i given the parents of i, and because of the first property in Lemma 1, they can be dropped from the conditioning in (17).
The global Markov property is a direct consequence of Lemma 1 and Theorem 3.27 in Lauritzen (1996).

Proof of theorem 1
To complete the proof, we need the following technical lemmas.When d(⋅, ⋅) is the Euclidean distance, we denote the Wasserstein metric by W E (⋅, ⋅).
Lemma 2 For real-valued random variables, we have

| E_{μ_1}[X] − E_{μ_2}[Y] | ≤ W_E(μ_1, μ_2) ≤ ( E_π[(X − Y)²] )^{1/2},

where π is any joint distribution of X and Y such that its marginals are μ_1 and μ_2.
Proof The lower bound follows from the dual representation of the Wasserstein metric and the fact that f(x) = x is Lipschitz.
For the upper bound, we use Jensen's inequality, (E_π[|X − Y|])^p ≤ E_π[|X − Y|^p] for p ≥ 1. For p = 2, we use the monotonicity of √x and the fact that the space of probability measures is complete, and obtain the result. ◻
Consider a network of variables in which every variable X_i functionally depends on a subset of the other variables, X_{Fp_i} (the parent set of node i), as follows:

X_i = F_i(X_{Fp_i}) + G_i(X_{Fp_i}) W_i, (20)

where F_i and G_i are arbitrary functions such that G_i ≠ 0, and the {W_i} denote exogenous noises with mean zero.
Lemma 3 For a system described by (20), the influence of node j on its child i given the rest of i's parents, Fp_i ⧵ {j}, under the Euclidean metric is bounded as in (21), where the supremum is taken over all realizations of X_{−{i}} that differ only at X_j.
Proof Using the lower bound of Lemma 2 and the fact that the W_i have zero mean, we obtain the lower bound in (21).
To obtain the upper bound, we again use the result of Lemma 2, with a joint distribution of (X_i, Y_i) constructed explicitly from the model, where f_{W_i} denotes the probability density function of W_i and 𝟙 denotes the indicator function. Using this joint distribution, we obtain the upper bound in (21). ◻
Applying the above result to a linear system in which F_i(x_{Fp_i}) = (Λx)_i and G_i(x_{Fp_i}) = 1, we obtain c^{Pa_i⧵{j}}_{i,j} = |Λ_{i,j}|, which proves Theorem 1.

Fig. 1 DAGs for which the information flow fails to capture the influence

Fig. 2 Estimated measures for different N

Fig. 3 Recovered DAGs of the system given in (15) for different sample sizes. a, b use the measure in (3) and pure observation; c, d use the kernel-based method and pure observation; e, f use the measure in (3) and interventional data; f shows the true structure.